00:00:00.000 Started by upstream project "autotest-per-patch" build number 132577
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.017 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.018 The recommended git tool is: git
00:00:00.018 using credential 00000000-0000-0000-0000-000000000002
00:00:00.020 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.036 Fetching changes from the remote Git repository
00:00:00.045 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.061 Using shallow fetch with depth 1
00:00:00.061 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.061 > git --version # timeout=10
00:00:00.077 > git --version # 'git version 2.39.2'
00:00:00.077 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.103 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.104 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.370 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.381 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.392 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:03.392 > git config core.sparsecheckout # timeout=10
00:00:03.402 > git read-tree -mu HEAD # timeout=10
00:00:03.417 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:03.437 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:03.437 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:03.527 [Pipeline] Start of Pipeline
00:00:03.540 [Pipeline] library
00:00:03.541 Loading library shm_lib@master
00:00:08.027 Library shm_lib@master is cached. Copying from home.
00:00:08.070 [Pipeline] node
00:00:08.259 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_2
00:00:08.261 [Pipeline] {
00:00:08.276 [Pipeline] catchError
00:00:08.279 [Pipeline] {
00:00:08.298 [Pipeline] wrap
00:00:08.305 [Pipeline] {
00:00:08.314 [Pipeline] stage
00:00:08.316 [Pipeline] { (Prologue)
00:00:08.342 [Pipeline] echo
00:00:08.370 Node: VM-host-WFP7
00:00:08.380 [Pipeline] cleanWs
00:00:08.390 [WS-CLEANUP] Deleting project workspace...
00:00:08.390 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.396 [WS-CLEANUP] done
00:00:08.754 [Pipeline] setCustomBuildProperty
00:00:08.817 [Pipeline] httpRequest
00:00:11.864 [Pipeline] echo
00:00:11.866 Sorcerer 10.211.164.20 is dead
00:00:11.874 [Pipeline] httpRequest
00:00:12.819 [Pipeline] echo
00:00:12.821 Sorcerer 10.211.164.101 is alive
00:00:12.832 [Pipeline] retry
00:00:12.835 [Pipeline] {
00:00:12.850 [Pipeline] httpRequest
00:00:12.855 HttpMethod: GET
00:00:12.856 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.857 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.858 Response Code: HTTP/1.1 200 OK
00:00:12.859 Success: Status code 200 is in the accepted range: 200,404
00:00:12.860 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.010 [Pipeline] }
00:00:13.031 [Pipeline] // retry
00:00:13.039 [Pipeline] sh
00:00:13.323 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.342 [Pipeline] httpRequest
00:00:14.581 [Pipeline] echo
00:00:14.583 Sorcerer 10.211.164.101 is alive
00:00:14.593 [Pipeline] retry
00:00:14.595 [Pipeline] {
00:00:14.611 [Pipeline] httpRequest
00:00:14.616 HttpMethod: GET
00:00:14.616 URL: http://10.211.164.101/packages/spdk_5977028896021975fabe08ce8485b4d939e7798e.tar.gz
00:00:14.617 Sending request to url: http://10.211.164.101/packages/spdk_5977028896021975fabe08ce8485b4d939e7798e.tar.gz
00:00:14.619 Response Code: HTTP/1.1 404 Not Found
00:00:14.619 Success: Status code 404 is in the accepted range: 200,404
00:00:14.620 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_5977028896021975fabe08ce8485b4d939e7798e.tar.gz
00:00:14.625 [Pipeline] }
00:00:14.642 [Pipeline] // retry
00:00:14.650 [Pipeline] sh
00:00:14.934 + rm -f spdk_5977028896021975fabe08ce8485b4d939e7798e.tar.gz
00:00:14.948 [Pipeline] retry
00:00:14.950 [Pipeline] {
00:00:14.967 [Pipeline] checkout
00:00:14.974 The recommended git tool is: NONE
00:00:14.985 using credential 00000000-0000-0000-0000-000000000002
00:00:14.987 Wiping out workspace first.
00:00:14.997 Cloning the remote Git repository
00:00:15.000 Honoring refspec on initial clone
00:00:15.003 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk
00:00:15.003 > git init /var/jenkins/workspace/raid-vg-autotest_2/spdk # timeout=10
00:00:15.010 Using reference repository: /var/ci_repos/spdk_multi
00:00:15.010 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk
00:00:15.010 > git --version # timeout=10
00:00:15.014 > git --version # 'git version 2.25.1'
00:00:15.014 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:15.018 Setting http proxy: proxy-dmz.intel.com:911
00:00:15.018 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/76/25476/2 +refs/heads/master:refs/remotes/origin/master # timeout=10
00:00:28.100 Avoid second fetch
00:00:28.184 Checking out Revision 5977028896021975fabe08ce8485b4d939e7798e (FETCH_HEAD)
00:00:28.385 Commit message: "lib/ftl: Add explicit support for write unit sizes of base device"
00:00:28.393 First time build. Skipping changelog.
00:00:28.063 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10
00:00:28.068 > git config --add remote.origin.fetch refs/changes/76/25476/2 # timeout=10
00:00:28.072 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10
00:00:28.101 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:28.160 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:28.185 > git config core.sparsecheckout # timeout=10
00:00:28.189 > git checkout -f 5977028896021975fabe08ce8485b4d939e7798e # timeout=10
00:00:28.386 > git rev-list --no-walk c25d82eb439cb2d3a69cd1b92f47ccb3bf8c8f01 # timeout=10
00:00:28.397 > git remote # timeout=10
00:00:28.401 > git submodule init # timeout=10
00:00:28.459 > git submodule sync # timeout=10
00:00:28.516 > git config --get remote.origin.url # timeout=10
00:00:28.524 > git submodule init # timeout=10
00:00:28.577 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10
00:00:28.582 > git config --get submodule.dpdk.url # timeout=10
00:00:28.586 > git remote # timeout=10
00:00:28.590 > git config --get remote.origin.url # timeout=10
00:00:28.593 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10
00:00:28.597 > git config --get submodule.intel-ipsec-mb.url # timeout=10
00:00:28.601 > git remote # timeout=10
00:00:28.605 > git config --get remote.origin.url # timeout=10
00:00:28.609 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10
00:00:28.613 > git config --get submodule.isa-l.url # timeout=10
00:00:28.617 > git remote # timeout=10
00:00:28.621 > git config --get remote.origin.url # timeout=10
00:00:28.625 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10
00:00:28.629 > git config --get submodule.ocf.url # timeout=10
00:00:28.633 > git remote # timeout=10
00:00:28.637 > git config --get remote.origin.url # timeout=10
00:00:28.641 > git config -f .gitmodules --get submodule.ocf.path # timeout=10
00:00:28.645 > git config --get submodule.libvfio-user.url # timeout=10
00:00:28.649 > git remote # timeout=10
00:00:28.653 > git config --get remote.origin.url # timeout=10
00:00:28.657 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10
00:00:28.661 > git config --get submodule.xnvme.url # timeout=10
00:00:28.665 > git remote # timeout=10
00:00:28.669 > git config --get remote.origin.url # timeout=10
00:00:28.673 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10
00:00:28.677 > git config --get submodule.isa-l-crypto.url # timeout=10
00:00:28.681 > git remote # timeout=10
00:00:28.685 > git config --get remote.origin.url # timeout=10
00:00:28.689 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10
00:00:28.696 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:28.696 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:28.696 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:28.696 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:28.696 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:28.697 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:28.697 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:28.700 Setting http proxy: proxy-dmz.intel.com:911
00:00:28.700 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10
00:00:28.700 Setting http proxy: proxy-dmz.intel.com:911
00:00:28.700 Setting http proxy: proxy-dmz.intel.com:911
00:00:28.700 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10
00:00:28.700 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10
00:00:28.700 Setting http proxy: proxy-dmz.intel.com:911
00:00:28.701 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10
00:00:28.701 Setting http proxy: proxy-dmz.intel.com:911
00:00:28.701 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10
00:00:28.701 Setting http proxy: proxy-dmz.intel.com:911
00:00:28.701 Setting http proxy: proxy-dmz.intel.com:911
00:00:28.701 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10
00:00:28.701 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10
00:01:10.488 [Pipeline] dir
00:01:10.489 Running in /var/jenkins/workspace/raid-vg-autotest_2/spdk
00:01:10.490 [Pipeline] {
00:01:10.505 [Pipeline] sh
00:01:10.789 ++ nproc
00:01:10.789 + threads=80
00:01:10.789 + git repack -a -d --threads=80
00:01:17.359 + git submodule foreach git repack -a -d --threads=80
00:01:17.359 Entering 'dpdk'
00:01:20.649 Entering 'intel-ipsec-mb'
00:01:20.649 Entering 'isa-l'
00:01:20.649 Entering 'isa-l-crypto'
00:01:20.649 Entering 'libvfio-user'
00:01:20.649 Entering 'ocf'
00:01:20.649 Entering 'xnvme'
00:01:20.908 + find .git -type f -name alternates -print -delete
00:01:20.908 .git/modules/libvfio-user/objects/info/alternates
00:01:20.908 .git/modules/isa-l-crypto/objects/info/alternates
00:01:20.908 .git/modules/ocf/objects/info/alternates
00:01:20.908 .git/modules/intel-ipsec-mb/objects/info/alternates
00:01:20.908 .git/modules/isa-l/objects/info/alternates
00:01:20.908 .git/modules/xnvme/objects/info/alternates
00:01:20.908 .git/modules/dpdk/objects/info/alternates
00:01:20.908 .git/objects/info/alternates
00:01:20.919 [Pipeline] }
00:01:20.938 [Pipeline] // dir
00:01:20.944 [Pipeline] }
00:01:20.960 [Pipeline] // retry
00:01:20.968 [Pipeline] sh
00:01:21.251 + hash pigz
00:01:21.251 + tar -czf spdk_5977028896021975fabe08ce8485b4d939e7798e.tar.gz spdk
00:01:36.155 [Pipeline] retry
00:01:36.157 [Pipeline] {
00:01:36.172 [Pipeline] httpRequest
00:01:36.179 HttpMethod: PUT
00:01:36.180 URL: http://10.211.164.101/cgi-bin/sorcerer.py?group=packages&filename=spdk_5977028896021975fabe08ce8485b4d939e7798e.tar.gz
00:01:36.181 Sending request to url: http://10.211.164.101/cgi-bin/sorcerer.py?group=packages&filename=spdk_5977028896021975fabe08ce8485b4d939e7798e.tar.gz
00:01:44.894 Response Code: HTTP/1.1 200 OK
00:01:44.921 Success: Status code 200 is in the accepted range: 200
00:01:44.924 [Pipeline] }
00:01:44.944 [Pipeline] // retry
00:01:44.953 [Pipeline] echo
00:01:44.955 
00:01:44.955 Locking
00:01:44.955 Waited 6s for lock
00:01:44.955 File already exists: /storage/packages/spdk_5977028896021975fabe08ce8485b4d939e7798e.tar.gz
00:01:44.955 
00:01:44.959 [Pipeline] sh
00:01:45.239 + git -C spdk log --oneline -n5
00:01:45.239 597702889 lib/ftl: Add explicit support for write unit sizes of base device
00:01:45.239 2f2acf4eb doc: move nvmf_tracing.md to tracing.md
00:01:45.239 5592070b3 doc: update nvmf_tracing.md
00:01:45.239 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK
00:01:45.239 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set
00:01:45.256 [Pipeline] writeFile
00:01:45.273 [Pipeline] sh
00:01:45.555 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:45.566 [Pipeline] sh
00:01:45.849 + cat autorun-spdk.conf
00:01:45.849 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:45.849 SPDK_RUN_ASAN=1
00:01:45.849 SPDK_RUN_UBSAN=1
00:01:45.849 SPDK_TEST_RAID=1
00:01:45.849 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:45.856 RUN_NIGHTLY=0
00:01:45.858 [Pipeline] }
00:01:45.873 [Pipeline] // stage
00:01:45.893 [Pipeline] stage
00:01:45.895 [Pipeline] { (Run VM)
00:01:45.908 [Pipeline] sh
00:01:46.187 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:46.187 + echo 'Start stage prepare_nvme.sh'
00:01:46.187 Start stage prepare_nvme.sh
00:01:46.187 + [[ -n 4 ]]
00:01:46.187 + disk_prefix=ex4
00:01:46.187 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]]
00:01:46.187 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]]
00:01:46.187 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf
00:01:46.187 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:46.187 ++ SPDK_RUN_ASAN=1
00:01:46.187 ++ SPDK_RUN_UBSAN=1
00:01:46.187 ++ SPDK_TEST_RAID=1
00:01:46.187 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:46.187 ++ RUN_NIGHTLY=0
00:01:46.187 + cd /var/jenkins/workspace/raid-vg-autotest_2
00:01:46.187 + nvme_files=()
00:01:46.187 + declare -A nvme_files
00:01:46.187 + backend_dir=/var/lib/libvirt/images/backends
00:01:46.187 + nvme_files['nvme.img']=5G
00:01:46.187 + nvme_files['nvme-cmb.img']=5G
00:01:46.187 + nvme_files['nvme-multi0.img']=4G
00:01:46.187 + nvme_files['nvme-multi1.img']=4G
00:01:46.187 + nvme_files['nvme-multi2.img']=4G
00:01:46.188 + nvme_files['nvme-openstack.img']=8G
00:01:46.188 + nvme_files['nvme-zns.img']=5G
00:01:46.188 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:46.188 + (( SPDK_TEST_FTL == 1 ))
00:01:46.188 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:46.188 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:46.188 + for nvme in "${!nvme_files[@]}"
00:01:46.188 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:01:46.188 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:46.188 + for nvme in "${!nvme_files[@]}"
00:01:46.188 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:01:46.188 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:46.188 + for nvme in "${!nvme_files[@]}"
00:01:46.188 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:01:46.188 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:46.188 + for nvme in "${!nvme_files[@]}"
00:01:46.188 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:01:46.188 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:46.188 + for nvme in "${!nvme_files[@]}"
00:01:46.188 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:01:46.188 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:46.188 + for nvme in "${!nvme_files[@]}"
00:01:46.188 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:01:46.188 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:46.188 + for nvme in "${!nvme_files[@]}"
00:01:46.188 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:01:46.188 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:46.188 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:01:46.446 + echo 'End stage prepare_nvme.sh'
00:01:46.446 End stage prepare_nvme.sh
00:01:46.457 [Pipeline] sh
00:01:46.734 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:46.734 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39
00:01:46.734 
00:01:46.734 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant
00:01:46.734 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk
00:01:46.734 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2
00:01:46.734 HELP=0
00:01:46.734 DRY_RUN=0
00:01:46.734 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,
00:01:46.734 NVME_DISKS_TYPE=nvme,nvme,
00:01:46.734 NVME_AUTO_CREATE=0
00:01:46.734 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,
00:01:46.734 NVME_CMB=,,
00:01:46.734 NVME_PMR=,,
00:01:46.734 NVME_ZNS=,,
00:01:46.734 NVME_MS=,,
00:01:46.734 NVME_FDP=,,
00:01:46.734 SPDK_VAGRANT_DISTRO=fedora39
00:01:46.734 SPDK_VAGRANT_VMCPU=10
00:01:46.734 SPDK_VAGRANT_VMRAM=12288
00:01:46.734 SPDK_VAGRANT_PROVIDER=libvirt
00:01:46.734 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:46.734 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:46.734 SPDK_OPENSTACK_NETWORK=0
00:01:46.734 VAGRANT_PACKAGE_BOX=0
00:01:46.734 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:01:46.734 FORCE_DISTRO=true
00:01:46.734 VAGRANT_BOX_VERSION=
00:01:46.734 EXTRA_VAGRANTFILES=
00:01:46.734 NIC_MODEL=virtio
00:01:46.734 
00:01:46.734 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt'
00:01:46.734 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2
00:01:49.264 Bringing machine 'default' up with 'libvirt' provider...
00:01:49.830 ==> default: Creating image (snapshot of base box volume).
00:01:49.830 ==> default: Creating domain with the following settings...
00:01:49.830 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732700330_3f089598cc99894bbc79
00:01:49.830 ==> default: -- Domain type: kvm
00:01:49.830 ==> default: -- Cpus: 10
00:01:49.830 ==> default: -- Feature: acpi
00:01:49.830 ==> default: -- Feature: apic
00:01:49.830 ==> default: -- Feature: pae
00:01:49.830 ==> default: -- Memory: 12288M
00:01:49.831 ==> default: -- Memory Backing: hugepages:
00:01:49.831 ==> default: -- Management MAC:
00:01:49.831 ==> default: -- Loader:
00:01:49.831 ==> default: -- Nvram:
00:01:49.831 ==> default: -- Base box: spdk/fedora39
00:01:49.831 ==> default: -- Storage pool: default
00:01:49.831 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732700330_3f089598cc99894bbc79.img (20G)
00:01:49.831 ==> default: -- Volume Cache: default
00:01:49.831 ==> default: -- Kernel:
00:01:49.831 ==> default: -- Initrd:
00:01:49.831 ==> default: -- Graphics Type: vnc
00:01:49.831 ==> default: -- Graphics Port: -1
00:01:49.831 ==> default: -- Graphics IP: 127.0.0.1
00:01:49.831 ==> default: -- Graphics Password: Not defined
00:01:49.831 ==> default: -- Video Type: cirrus
00:01:49.831 ==> default: -- Video VRAM: 9216
00:01:49.831 ==> default: -- Sound Type:
00:01:49.831 ==> default: -- Keymap: en-us
00:01:49.831 ==> default: -- TPM Path:
00:01:49.831 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:49.831 ==> default: -- Command line args:
00:01:49.831 ==> default: -> value=-device,
00:01:49.831 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:49.831 ==> default: -> value=-drive,
00:01:49.831 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0,
00:01:49.831 ==> default: -> value=-device,
00:01:49.831 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:49.831 ==> default: -> value=-device,
00:01:49.831 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:49.831 ==> default: -> value=-drive,
00:01:49.831 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:49.831 ==> default: -> value=-device,
00:01:49.831 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:49.831 ==> default: -> value=-drive,
00:01:49.831 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:49.831 ==> default: -> value=-device,
00:01:49.831 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:49.831 ==> default: -> value=-drive,
00:01:49.831 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:49.831 ==> default: -> value=-device,
00:01:49.831 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:50.089 ==> default: Creating shared folders metadata...
00:01:50.089 ==> default: Starting domain.
00:01:51.468 ==> default: Waiting for domain to get an IP address...
00:02:09.544 ==> default: Waiting for SSH to become available...
00:02:09.544 ==> default: Configuring and enabling network interfaces...
00:02:14.813 default: SSH address: 192.168.121.232:22
00:02:14.813 default: SSH username: vagrant
00:02:14.813 default: SSH auth method: private key
00:02:18.108 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:28.080 ==> default: Mounting SSHFS shared folder...
00:02:29.015 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:29.015 ==> default: Checking Mount..
00:02:30.929 ==> default: Folder Successfully Mounted!
00:02:30.929 ==> default: Running provisioner: file...
00:02:31.513 default: ~/.gitconfig => .gitconfig
00:02:32.081 
00:02:32.081 SUCCESS!
00:02:32.081 
00:02:32.081 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:02:32.081 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:32.081 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:02:32.081 
00:02:32.091 [Pipeline] }
00:02:32.108 [Pipeline] // stage
00:02:32.118 [Pipeline] dir
00:02:32.119 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt
00:02:32.121 [Pipeline] {
00:02:32.135 [Pipeline] catchError
00:02:32.137 [Pipeline] {
00:02:32.151 [Pipeline] sh
00:02:32.437 + vagrant ssh-config --host vagrant
00:02:32.437 + sed -ne /^Host/,$p
00:02:32.437 + tee ssh_conf
00:02:35.729 Host vagrant
00:02:35.729 HostName 192.168.121.232
00:02:35.729 User vagrant
00:02:35.729 Port 22
00:02:35.729 UserKnownHostsFile /dev/null
00:02:35.729 StrictHostKeyChecking no
00:02:35.729 PasswordAuthentication no
00:02:35.730 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:35.730 IdentitiesOnly yes
00:02:35.730 LogLevel FATAL
00:02:35.730 ForwardAgent yes
00:02:35.730 ForwardX11 yes
00:02:35.730 
00:02:35.745 [Pipeline] withEnv
00:02:35.748 [Pipeline] {
00:02:35.763 [Pipeline] sh
00:02:36.046 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:36.046 source /etc/os-release
00:02:36.046 [[ -e /image.version ]] && img=$(< /image.version)
00:02:36.046 # Minimal, systemd-like check.
00:02:36.046 if [[ -e /.dockerenv ]]; then
00:02:36.046 # Clear garbage from the node's name:
00:02:36.046 # agt-er_autotest_547-896 -> autotest_547-896
00:02:36.046 # $HOSTNAME is the actual container id
00:02:36.046 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:36.046 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:36.046 # We can assume this is a mount from a host where container is running,
00:02:36.046 # so fetch its hostname to easily identify the target swarm worker.
00:02:36.046 container="$(< /etc/hostname) ($agent)"
00:02:36.046 else
00:02:36.046 # Fallback
00:02:36.046 container=$agent
00:02:36.046 fi
00:02:36.046 fi
00:02:36.046 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:36.046 
00:02:36.318 [Pipeline] }
00:02:36.337 [Pipeline] // withEnv
00:02:36.347 [Pipeline] setCustomBuildProperty
00:02:36.364 [Pipeline] stage
00:02:36.366 [Pipeline] { (Tests)
00:02:36.386 [Pipeline] sh
00:02:36.671 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:36.946 [Pipeline] sh
00:02:37.230 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:37.507 [Pipeline] timeout
00:02:37.507 Timeout set to expire in 1 hr 30 min
00:02:37.509 [Pipeline] {
00:02:37.523 [Pipeline] sh
00:02:37.806 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:38.379 HEAD is now at 597702889 lib/ftl: Add explicit support for write unit sizes of base device
00:02:38.390 [Pipeline] sh
00:02:38.669 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:38.940 [Pipeline] sh
00:02:39.220 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:39.493 [Pipeline] sh
00:02:39.775 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:02:40.035 ++ readlink -f spdk_repo
00:02:40.035 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:40.035 + [[ -n /home/vagrant/spdk_repo ]]
00:02:40.035 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:40.035 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:40.035 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:40.035 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:40.035 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:40.035 + [[ raid-vg-autotest == pkgdep-* ]]
00:02:40.035 + cd /home/vagrant/spdk_repo
00:02:40.035 + source /etc/os-release
00:02:40.035 ++ NAME='Fedora Linux'
00:02:40.035 ++ VERSION='39 (Cloud Edition)'
00:02:40.035 ++ ID=fedora
00:02:40.035 ++ VERSION_ID=39
00:02:40.035 ++ VERSION_CODENAME=
00:02:40.035 ++ PLATFORM_ID=platform:f39
00:02:40.035 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:40.035 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:40.035 ++ LOGO=fedora-logo-icon
00:02:40.035 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:40.035 ++ HOME_URL=https://fedoraproject.org/
00:02:40.035 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:40.035 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:40.035 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:40.035 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:40.035 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:40.035 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:40.035 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:40.035 ++ SUPPORT_END=2024-11-12
00:02:40.035 ++ VARIANT='Cloud Edition'
00:02:40.035 ++ VARIANT_ID=cloud
00:02:40.036 + uname -a
00:02:40.036 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:40.036 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:40.603 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:40.603 Hugepages
00:02:40.603 node hugesize free / total
00:02:40.603 node0 1048576kB 0 / 0
00:02:40.603 node0 2048kB 0 / 0
00:02:40.603 
00:02:40.603 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:40.603 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:40.603 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:40.603 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:40.603 + rm -f /tmp/spdk-ld-path
00:02:40.603 + source autorun-spdk.conf
00:02:40.603 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:40.603 ++ SPDK_RUN_ASAN=1
00:02:40.603 ++ SPDK_RUN_UBSAN=1
00:02:40.603 ++ SPDK_TEST_RAID=1
00:02:40.603 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:40.603 ++ RUN_NIGHTLY=0
00:02:40.603 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:40.603 + [[ -n '' ]]
00:02:40.603 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:40.603 + for M in /var/spdk/build-*-manifest.txt
00:02:40.603 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:40.603 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:40.603 + for M in /var/spdk/build-*-manifest.txt
00:02:40.603 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:40.603 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:40.603 + for M in /var/spdk/build-*-manifest.txt
00:02:40.603 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:40.603 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:40.603 ++ uname
00:02:40.603 + [[ Linux == \L\i\n\u\x ]]
00:02:40.603 + sudo dmesg -T
00:02:40.603 + sudo dmesg --clear
00:02:40.603 + sudo dmesg -Tw
00:02:40.603 + dmesg_pid=5439
00:02:40.603 + [[ Fedora Linux == FreeBSD ]]
00:02:40.603 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:40.603 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:40.603 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:40.603 + [[ -x /usr/src/fio-static/fio ]]
00:02:40.603 + export FIO_BIN=/usr/src/fio-static/fio
00:02:40.603 + FIO_BIN=/usr/src/fio-static/fio
00:02:40.603 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:40.603 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:40.603 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:40.603 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:40.603 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:40.603 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:40.603 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:40.603 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:40.603 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:40.862 09:39:41 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:40.862 09:39:41 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:40.862 09:39:41 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:40.862 09:39:41 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:02:40.862 09:39:41 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:02:40.862 09:39:41 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:02:40.862 09:39:41 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:40.862 09:39:41 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
00:02:40.862 09:39:41 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:40.862 09:39:41 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:40.862 09:39:41 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:40.862 09:39:41 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:40.862 09:39:41 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:40.862 09:39:41 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:40.862 09:39:41 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:40.862 09:39:41 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:40.862 09:39:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:40.862 09:39:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:40.862 09:39:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:40.862 09:39:41 -- paths/export.sh@5 -- $ export PATH
00:02:40.862 09:39:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:40.862 09:39:41 --
common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:40.862 09:39:41 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:40.862 09:39:41 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732700381.XXXXXX 00:02:40.862 09:39:41 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732700381.gaJOjS 00:02:40.862 09:39:41 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:40.862 09:39:41 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:40.862 09:39:41 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:40.862 09:39:41 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:40.862 09:39:41 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:40.863 09:39:41 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:40.863 09:39:41 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:40.863 09:39:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:40.863 09:39:41 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:02:40.863 09:39:41 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:40.863 09:39:41 -- pm/common@17 -- $ local monitor 00:02:40.863 09:39:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.863 09:39:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:40.863 09:39:41 -- pm/common@21 -- $ date +%s 00:02:40.863 09:39:41 -- pm/common@25 -- $ sleep 1 00:02:40.863 09:39:41 -- pm/common@21 -- $ date +%s 00:02:40.863 
09:39:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732700381 00:02:40.863 09:39:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732700381 00:02:41.121 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732700381_collect-vmstat.pm.log 00:02:41.121 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732700381_collect-cpu-load.pm.log 00:02:42.057 09:39:42 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:42.057 09:39:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:42.057 09:39:42 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:42.057 09:39:42 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:42.057 09:39:42 -- spdk/autobuild.sh@16 -- $ date -u 00:02:42.057 Wed Nov 27 09:39:42 AM UTC 2024 00:02:42.057 09:39:42 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:42.057 v25.01-pre-272-g597702889 00:02:42.057 09:39:42 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:42.057 09:39:42 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:42.057 09:39:42 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:42.057 09:39:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:42.057 09:39:42 -- common/autotest_common.sh@10 -- $ set +x 00:02:42.057 ************************************ 00:02:42.057 START TEST asan 00:02:42.057 ************************************ 00:02:42.057 using asan 00:02:42.057 09:39:42 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:42.057 00:02:42.057 real 0m0.000s 00:02:42.057 user 0m0.000s 00:02:42.057 sys 0m0.000s 00:02:42.057 09:39:42 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:42.057 09:39:42 asan -- common/autotest_common.sh@10 -- $ set +x 
00:02:42.057 ************************************ 00:02:42.057 END TEST asan 00:02:42.057 ************************************ 00:02:42.057 09:39:43 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:42.057 09:39:43 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:42.057 09:39:43 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:42.057 09:39:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:42.057 09:39:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:42.057 ************************************ 00:02:42.057 START TEST ubsan 00:02:42.057 ************************************ 00:02:42.057 using ubsan 00:02:42.057 09:39:43 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:42.057 00:02:42.057 real 0m0.000s 00:02:42.057 user 0m0.000s 00:02:42.057 sys 0m0.000s 00:02:42.057 09:39:43 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:42.057 09:39:43 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:42.057 ************************************ 00:02:42.057 END TEST ubsan 00:02:42.057 ************************************ 00:02:42.057 09:39:43 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:42.057 09:39:43 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:42.057 09:39:43 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:42.057 09:39:43 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:42.057 09:39:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:42.057 09:39:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:42.057 09:39:43 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:42.057 09:39:43 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:42.057 09:39:43 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:02:42.315 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:42.315 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:42.881 Using 'verbs' RDMA provider 00:03:01.923 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:16.809 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:16.809 Creating mk/config.mk...done. 00:03:16.809 Creating mk/cc.flags.mk...done. 00:03:16.809 Type 'make' to build. 00:03:16.809 09:40:16 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:16.809 09:40:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:16.809 09:40:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:16.809 09:40:16 -- common/autotest_common.sh@10 -- $ set +x 00:03:16.809 ************************************ 00:03:16.809 START TEST make 00:03:16.809 ************************************ 00:03:16.809 09:40:16 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:16.809 make[1]: Nothing to be done for 'all'. 
00:03:16.809 help2man: can't get `--help' info from ./programs/igzip 00:03:16.809 Try `--no-discard-stderr' if option outputs to stderr 00:03:16.810 make[3]: [Makefile:4944: programs/igzip.1] Error 127 (ignored) 00:03:26.799 The Meson build system 00:03:26.799 Version: 1.5.0 00:03:26.799 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:26.799 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:26.799 Build type: native build 00:03:26.799 Program cat found: YES (/usr/bin/cat) 00:03:26.799 Project name: DPDK 00:03:26.799 Project version: 24.03.0 00:03:26.799 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:26.799 C linker for the host machine: cc ld.bfd 2.40-14 00:03:26.799 Host machine cpu family: x86_64 00:03:26.799 Host machine cpu: x86_64 00:03:26.799 Message: ## Building in Developer Mode ## 00:03:26.799 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:26.799 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:26.799 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:26.799 Program python3 found: YES (/usr/bin/python3) 00:03:26.799 Program cat found: YES (/usr/bin/cat) 00:03:26.799 Compiler for C supports arguments -march=native: YES 00:03:26.799 Checking for size of "void *" : 8 00:03:26.799 Checking for size of "void *" : 8 (cached) 00:03:26.799 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:26.799 Library m found: YES 00:03:26.799 Library numa found: YES 00:03:26.799 Has header "numaif.h" : YES 00:03:26.799 Library fdt found: NO 00:03:26.799 Library execinfo found: NO 00:03:26.799 Has header "execinfo.h" : YES 00:03:26.799 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:26.799 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:26.799 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:26.799 Run-time 
dependency jansson found: NO (tried pkgconfig) 00:03:26.799 Run-time dependency openssl found: YES 3.1.1 00:03:26.799 Run-time dependency libpcap found: YES 1.10.4 00:03:26.799 Has header "pcap.h" with dependency libpcap: YES 00:03:26.799 Compiler for C supports arguments -Wcast-qual: YES 00:03:26.799 Compiler for C supports arguments -Wdeprecated: YES 00:03:26.799 Compiler for C supports arguments -Wformat: YES 00:03:26.799 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:26.799 Compiler for C supports arguments -Wformat-security: NO 00:03:26.799 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:26.799 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:26.799 Compiler for C supports arguments -Wnested-externs: YES 00:03:26.799 Compiler for C supports arguments -Wold-style-definition: YES 00:03:26.799 Compiler for C supports arguments -Wpointer-arith: YES 00:03:26.799 Compiler for C supports arguments -Wsign-compare: YES 00:03:26.799 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:26.799 Compiler for C supports arguments -Wundef: YES 00:03:26.799 Compiler for C supports arguments -Wwrite-strings: YES 00:03:26.799 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:26.799 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:26.799 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:26.799 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:26.799 Program objdump found: YES (/usr/bin/objdump) 00:03:26.799 Compiler for C supports arguments -mavx512f: YES 00:03:26.799 Checking if "AVX512 checking" compiles: YES 00:03:26.799 Fetching value of define "__SSE4_2__" : 1 00:03:26.799 Fetching value of define "__AES__" : 1 00:03:26.799 Fetching value of define "__AVX__" : 1 00:03:26.799 Fetching value of define "__AVX2__" : 1 00:03:26.799 Fetching value of define "__AVX512BW__" : 1 00:03:26.799 Fetching value of define 
"__AVX512CD__" : 1 00:03:26.799 Fetching value of define "__AVX512DQ__" : 1 00:03:26.799 Fetching value of define "__AVX512F__" : 1 00:03:26.799 Fetching value of define "__AVX512VL__" : 1 00:03:26.799 Fetching value of define "__PCLMUL__" : 1 00:03:26.799 Fetching value of define "__RDRND__" : 1 00:03:26.799 Fetching value of define "__RDSEED__" : 1 00:03:26.799 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:26.799 Fetching value of define "__znver1__" : (undefined) 00:03:26.799 Fetching value of define "__znver2__" : (undefined) 00:03:26.799 Fetching value of define "__znver3__" : (undefined) 00:03:26.799 Fetching value of define "__znver4__" : (undefined) 00:03:26.799 Library asan found: YES 00:03:26.799 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:26.799 Message: lib/log: Defining dependency "log" 00:03:26.799 Message: lib/kvargs: Defining dependency "kvargs" 00:03:26.799 Message: lib/telemetry: Defining dependency "telemetry" 00:03:26.799 Library rt found: YES 00:03:26.799 Checking for function "getentropy" : NO 00:03:26.799 Message: lib/eal: Defining dependency "eal" 00:03:26.799 Message: lib/ring: Defining dependency "ring" 00:03:26.799 Message: lib/rcu: Defining dependency "rcu" 00:03:26.799 Message: lib/mempool: Defining dependency "mempool" 00:03:26.799 Message: lib/mbuf: Defining dependency "mbuf" 00:03:26.799 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:26.799 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:26.799 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:26.799 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:26.799 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:26.799 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:26.799 Compiler for C supports arguments -mpclmul: YES 00:03:26.799 Compiler for C supports arguments -maes: YES 00:03:26.799 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:26.799 Compiler for C supports 
arguments -mavx512bw: YES 00:03:26.799 Compiler for C supports arguments -mavx512dq: YES 00:03:26.799 Compiler for C supports arguments -mavx512vl: YES 00:03:26.799 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:26.799 Compiler for C supports arguments -mavx2: YES 00:03:26.799 Compiler for C supports arguments -mavx: YES 00:03:26.799 Message: lib/net: Defining dependency "net" 00:03:26.799 Message: lib/meter: Defining dependency "meter" 00:03:26.799 Message: lib/ethdev: Defining dependency "ethdev" 00:03:26.799 Message: lib/pci: Defining dependency "pci" 00:03:26.799 Message: lib/cmdline: Defining dependency "cmdline" 00:03:26.799 Message: lib/hash: Defining dependency "hash" 00:03:26.799 Message: lib/timer: Defining dependency "timer" 00:03:26.799 Message: lib/compressdev: Defining dependency "compressdev" 00:03:26.799 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:26.799 Message: lib/dmadev: Defining dependency "dmadev" 00:03:26.799 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:26.799 Message: lib/power: Defining dependency "power" 00:03:26.799 Message: lib/reorder: Defining dependency "reorder" 00:03:26.799 Message: lib/security: Defining dependency "security" 00:03:26.799 Has header "linux/userfaultfd.h" : YES 00:03:26.799 Has header "linux/vduse.h" : YES 00:03:26.799 Message: lib/vhost: Defining dependency "vhost" 00:03:26.799 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:26.799 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:26.799 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:26.799 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:26.799 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:26.799 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:26.799 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:26.799 Message: Disabling event/* drivers: missing 
internal dependency "eventdev" 00:03:26.799 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:26.799 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:26.799 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:26.799 Configuring doxy-api-html.conf using configuration 00:03:26.799 Configuring doxy-api-man.conf using configuration 00:03:26.799 Program mandb found: YES (/usr/bin/mandb) 00:03:26.799 Program sphinx-build found: NO 00:03:26.799 Configuring rte_build_config.h using configuration 00:03:26.799 Message: 00:03:26.799 ================= 00:03:26.799 Applications Enabled 00:03:26.799 ================= 00:03:26.799 00:03:26.799 apps: 00:03:26.799 00:03:26.799 00:03:26.799 Message: 00:03:26.799 ================= 00:03:26.799 Libraries Enabled 00:03:26.799 ================= 00:03:26.799 00:03:26.799 libs: 00:03:26.800 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:26.800 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:26.800 cryptodev, dmadev, power, reorder, security, vhost, 00:03:26.800 00:03:26.800 Message: 00:03:26.800 =============== 00:03:26.800 Drivers Enabled 00:03:26.800 =============== 00:03:26.800 00:03:26.800 common: 00:03:26.800 00:03:26.800 bus: 00:03:26.800 pci, vdev, 00:03:26.800 mempool: 00:03:26.800 ring, 00:03:26.800 dma: 00:03:26.800 00:03:26.800 net: 00:03:26.800 00:03:26.800 crypto: 00:03:26.800 00:03:26.800 compress: 00:03:26.800 00:03:26.800 vdpa: 00:03:26.800 00:03:26.800 00:03:26.800 Message: 00:03:26.800 ================= 00:03:26.800 Content Skipped 00:03:26.800 ================= 00:03:26.800 00:03:26.800 apps: 00:03:26.800 dumpcap: explicitly disabled via build config 00:03:26.800 graph: explicitly disabled via build config 00:03:26.800 pdump: explicitly disabled via build config 00:03:26.800 proc-info: explicitly disabled via build config 00:03:26.800 test-acl: explicitly disabled via build config 00:03:26.800 test-bbdev: explicitly disabled 
via build config 00:03:26.800 test-cmdline: explicitly disabled via build config 00:03:26.800 test-compress-perf: explicitly disabled via build config 00:03:26.800 test-crypto-perf: explicitly disabled via build config 00:03:26.800 test-dma-perf: explicitly disabled via build config 00:03:26.800 test-eventdev: explicitly disabled via build config 00:03:26.800 test-fib: explicitly disabled via build config 00:03:26.800 test-flow-perf: explicitly disabled via build config 00:03:26.800 test-gpudev: explicitly disabled via build config 00:03:26.800 test-mldev: explicitly disabled via build config 00:03:26.800 test-pipeline: explicitly disabled via build config 00:03:26.800 test-pmd: explicitly disabled via build config 00:03:26.800 test-regex: explicitly disabled via build config 00:03:26.800 test-sad: explicitly disabled via build config 00:03:26.800 test-security-perf: explicitly disabled via build config 00:03:26.800 00:03:26.800 libs: 00:03:26.800 argparse: explicitly disabled via build config 00:03:26.800 metrics: explicitly disabled via build config 00:03:26.800 acl: explicitly disabled via build config 00:03:26.800 bbdev: explicitly disabled via build config 00:03:26.800 bitratestats: explicitly disabled via build config 00:03:26.800 bpf: explicitly disabled via build config 00:03:26.800 cfgfile: explicitly disabled via build config 00:03:26.800 distributor: explicitly disabled via build config 00:03:26.800 efd: explicitly disabled via build config 00:03:26.800 eventdev: explicitly disabled via build config 00:03:26.800 dispatcher: explicitly disabled via build config 00:03:26.800 gpudev: explicitly disabled via build config 00:03:26.800 gro: explicitly disabled via build config 00:03:26.800 gso: explicitly disabled via build config 00:03:26.800 ip_frag: explicitly disabled via build config 00:03:26.800 jobstats: explicitly disabled via build config 00:03:26.800 latencystats: explicitly disabled via build config 00:03:26.800 lpm: explicitly disabled via build 
config 00:03:26.800 member: explicitly disabled via build config 00:03:26.800 pcapng: explicitly disabled via build config 00:03:26.800 rawdev: explicitly disabled via build config 00:03:26.800 regexdev: explicitly disabled via build config 00:03:26.800 mldev: explicitly disabled via build config 00:03:26.800 rib: explicitly disabled via build config 00:03:26.800 sched: explicitly disabled via build config 00:03:26.800 stack: explicitly disabled via build config 00:03:26.800 ipsec: explicitly disabled via build config 00:03:26.800 pdcp: explicitly disabled via build config 00:03:26.800 fib: explicitly disabled via build config 00:03:26.800 port: explicitly disabled via build config 00:03:26.800 pdump: explicitly disabled via build config 00:03:26.800 table: explicitly disabled via build config 00:03:26.800 pipeline: explicitly disabled via build config 00:03:26.800 graph: explicitly disabled via build config 00:03:26.800 node: explicitly disabled via build config 00:03:26.800 00:03:26.800 drivers: 00:03:26.800 common/cpt: not in enabled drivers build config 00:03:26.800 common/dpaax: not in enabled drivers build config 00:03:26.800 common/iavf: not in enabled drivers build config 00:03:26.800 common/idpf: not in enabled drivers build config 00:03:26.800 common/ionic: not in enabled drivers build config 00:03:26.800 common/mvep: not in enabled drivers build config 00:03:26.800 common/octeontx: not in enabled drivers build config 00:03:26.800 bus/auxiliary: not in enabled drivers build config 00:03:26.800 bus/cdx: not in enabled drivers build config 00:03:26.800 bus/dpaa: not in enabled drivers build config 00:03:26.800 bus/fslmc: not in enabled drivers build config 00:03:26.800 bus/ifpga: not in enabled drivers build config 00:03:26.800 bus/platform: not in enabled drivers build config 00:03:26.800 bus/uacce: not in enabled drivers build config 00:03:26.800 bus/vmbus: not in enabled drivers build config 00:03:26.800 common/cnxk: not in enabled drivers build config 
00:03:26.800 common/mlx5: not in enabled drivers build config 00:03:26.800 common/nfp: not in enabled drivers build config 00:03:26.800 common/nitrox: not in enabled drivers build config 00:03:26.800 common/qat: not in enabled drivers build config 00:03:26.800 common/sfc_efx: not in enabled drivers build config 00:03:26.800 mempool/bucket: not in enabled drivers build config 00:03:26.800 mempool/cnxk: not in enabled drivers build config 00:03:26.800 mempool/dpaa: not in enabled drivers build config 00:03:26.800 mempool/dpaa2: not in enabled drivers build config 00:03:26.800 mempool/octeontx: not in enabled drivers build config 00:03:26.800 mempool/stack: not in enabled drivers build config 00:03:26.800 dma/cnxk: not in enabled drivers build config 00:03:26.800 dma/dpaa: not in enabled drivers build config 00:03:26.800 dma/dpaa2: not in enabled drivers build config 00:03:26.800 dma/hisilicon: not in enabled drivers build config 00:03:26.800 dma/idxd: not in enabled drivers build config 00:03:26.800 dma/ioat: not in enabled drivers build config 00:03:26.800 dma/skeleton: not in enabled drivers build config 00:03:26.800 net/af_packet: not in enabled drivers build config 00:03:26.800 net/af_xdp: not in enabled drivers build config 00:03:26.800 net/ark: not in enabled drivers build config 00:03:26.800 net/atlantic: not in enabled drivers build config 00:03:26.800 net/avp: not in enabled drivers build config 00:03:26.800 net/axgbe: not in enabled drivers build config 00:03:26.800 net/bnx2x: not in enabled drivers build config 00:03:26.800 net/bnxt: not in enabled drivers build config 00:03:26.800 net/bonding: not in enabled drivers build config 00:03:26.800 net/cnxk: not in enabled drivers build config 00:03:26.800 net/cpfl: not in enabled drivers build config 00:03:26.800 net/cxgbe: not in enabled drivers build config 00:03:26.800 net/dpaa: not in enabled drivers build config 00:03:26.800 net/dpaa2: not in enabled drivers build config 00:03:26.800 net/e1000: not in 
enabled drivers build config 00:03:26.800 net/ena: not in enabled drivers build config 00:03:26.800 net/enetc: not in enabled drivers build config 00:03:26.800 net/enetfec: not in enabled drivers build config 00:03:26.800 net/enic: not in enabled drivers build config 00:03:26.800 net/failsafe: not in enabled drivers build config 00:03:26.800 net/fm10k: not in enabled drivers build config 00:03:26.800 net/gve: not in enabled drivers build config 00:03:26.800 net/hinic: not in enabled drivers build config 00:03:26.800 net/hns3: not in enabled drivers build config 00:03:26.800 net/i40e: not in enabled drivers build config 00:03:26.800 net/iavf: not in enabled drivers build config 00:03:26.800 net/ice: not in enabled drivers build config 00:03:26.800 net/idpf: not in enabled drivers build config 00:03:26.800 net/igc: not in enabled drivers build config 00:03:26.800 net/ionic: not in enabled drivers build config 00:03:26.800 net/ipn3ke: not in enabled drivers build config 00:03:26.800 net/ixgbe: not in enabled drivers build config 00:03:26.800 net/mana: not in enabled drivers build config 00:03:26.800 net/memif: not in enabled drivers build config 00:03:26.800 net/mlx4: not in enabled drivers build config 00:03:26.800 net/mlx5: not in enabled drivers build config 00:03:26.800 net/mvneta: not in enabled drivers build config 00:03:26.800 net/mvpp2: not in enabled drivers build config 00:03:26.800 net/netvsc: not in enabled drivers build config 00:03:26.800 net/nfb: not in enabled drivers build config 00:03:26.800 net/nfp: not in enabled drivers build config 00:03:26.800 net/ngbe: not in enabled drivers build config 00:03:26.800 net/null: not in enabled drivers build config 00:03:26.800 net/octeontx: not in enabled drivers build config 00:03:26.800 net/octeon_ep: not in enabled drivers build config 00:03:26.800 net/pcap: not in enabled drivers build config 00:03:26.800 net/pfe: not in enabled drivers build config 00:03:26.800 net/qede: not in enabled drivers build config 
00:03:26.800 net/ring: not in enabled drivers build config 00:03:26.800 net/sfc: not in enabled drivers build config 00:03:26.800 net/softnic: not in enabled drivers build config 00:03:26.800 net/tap: not in enabled drivers build config 00:03:26.800 net/thunderx: not in enabled drivers build config 00:03:26.800 net/txgbe: not in enabled drivers build config 00:03:26.800 net/vdev_netvsc: not in enabled drivers build config 00:03:26.800 net/vhost: not in enabled drivers build config 00:03:26.800 net/virtio: not in enabled drivers build config 00:03:26.800 net/vmxnet3: not in enabled drivers build config 00:03:26.800 raw/*: missing internal dependency, "rawdev" 00:03:26.800 crypto/armv8: not in enabled drivers build config 00:03:26.801 crypto/bcmfs: not in enabled drivers build config 00:03:26.801 crypto/caam_jr: not in enabled drivers build config 00:03:26.801 crypto/ccp: not in enabled drivers build config 00:03:26.801 crypto/cnxk: not in enabled drivers build config 00:03:26.801 crypto/dpaa_sec: not in enabled drivers build config 00:03:26.801 crypto/dpaa2_sec: not in enabled drivers build config 00:03:26.801 crypto/ipsec_mb: not in enabled drivers build config 00:03:26.801 crypto/mlx5: not in enabled drivers build config 00:03:26.801 crypto/mvsam: not in enabled drivers build config 00:03:26.801 crypto/nitrox: not in enabled drivers build config 00:03:26.801 crypto/null: not in enabled drivers build config 00:03:26.801 crypto/octeontx: not in enabled drivers build config 00:03:26.801 crypto/openssl: not in enabled drivers build config 00:03:26.801 crypto/scheduler: not in enabled drivers build config 00:03:26.801 crypto/uadk: not in enabled drivers build config 00:03:26.801 crypto/virtio: not in enabled drivers build config 00:03:26.801 compress/isal: not in enabled drivers build config 00:03:26.801 compress/mlx5: not in enabled drivers build config 00:03:26.801 compress/nitrox: not in enabled drivers build config 00:03:26.801 compress/octeontx: not in enabled 
drivers build config 00:03:26.801 compress/zlib: not in enabled drivers build config 00:03:26.801 regex/*: missing internal dependency, "regexdev" 00:03:26.801 ml/*: missing internal dependency, "mldev" 00:03:26.801 vdpa/ifc: not in enabled drivers build config 00:03:26.801 vdpa/mlx5: not in enabled drivers build config 00:03:26.801 vdpa/nfp: not in enabled drivers build config 00:03:26.801 vdpa/sfc: not in enabled drivers build config 00:03:26.801 event/*: missing internal dependency, "eventdev" 00:03:26.801 baseband/*: missing internal dependency, "bbdev" 00:03:26.801 gpu/*: missing internal dependency, "gpudev" 00:03:26.801 00:03:26.801 00:03:27.062 Build targets in project: 85 00:03:27.062 00:03:27.062 DPDK 24.03.0 00:03:27.062 00:03:27.062 User defined options 00:03:27.062 buildtype : debug 00:03:27.062 default_library : shared 00:03:27.062 libdir : lib 00:03:27.062 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:27.062 b_sanitize : address 00:03:27.062 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:27.062 c_link_args : 00:03:27.062 cpu_instruction_set: native 00:03:27.062 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:27.062 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:27.062 enable_docs : false 00:03:27.062 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:27.062 enable_kmods : false 00:03:27.062 max_lcores : 128 00:03:27.062 tests : false 00:03:27.062 00:03:27.062 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:27.321 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:27.581 [1/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:27.581 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:27.581 [3/268] Linking static target lib/librte_log.a 00:03:27.581 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:27.581 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:27.581 [6/268] Linking static target lib/librte_kvargs.a 00:03:27.841 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:27.841 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:27.841 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:28.101 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:28.101 [11/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.101 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:28.101 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:28.101 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:28.101 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:28.101 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:28.101 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:28.360 [18/268] Linking static target lib/librte_telemetry.a 00:03:28.360 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.619 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:28.619 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:28.619 
[22/268] Linking target lib/librte_log.so.24.1 00:03:28.619 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:28.620 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:28.620 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:28.879 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:28.879 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:28.879 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:28.879 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:28.879 [30/268] Linking target lib/librte_kvargs.so.24.1 00:03:28.879 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:28.879 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:29.138 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.139 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:29.139 [35/268] Linking target lib/librte_telemetry.so.24.1 00:03:29.139 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:29.139 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:29.139 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:29.397 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:29.397 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:29.397 [41/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:29.397 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:29.397 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 
00:03:29.397 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:29.657 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:29.657 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:29.916 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:29.916 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:29.916 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:29.916 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:29.916 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:29.916 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:30.176 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:30.176 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:30.176 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:30.435 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:30.435 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:30.435 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:30.435 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:30.694 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:30.694 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:30.694 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:30.694 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:30.694 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:30.694 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:30.694 [66/268] Compiling C 
object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:30.953 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:30.953 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:30.953 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:31.212 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:31.212 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:31.212 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:31.212 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:31.212 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:31.212 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:31.212 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:31.212 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:31.471 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:31.471 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:31.471 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:31.471 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:31.729 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:31.729 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:31.729 [84/268] Linking static target lib/librte_ring.a 00:03:31.729 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:31.729 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:32.016 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:32.016 [88/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:32.016 [89/268] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:32.016 [90/268] Linking static target lib/librte_eal.a 00:03:32.016 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:32.016 [92/268] Linking static target lib/librte_mempool.a 00:03:32.016 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:32.275 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:32.275 [95/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:32.275 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.275 [97/268] Linking static target lib/librte_rcu.a 00:03:32.275 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:32.533 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:32.533 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:32.533 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:32.792 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:32.792 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:32.792 [104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:32.792 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:32.792 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:32.792 [107/268] Linking static target lib/librte_mbuf.a 00:03:32.792 [108/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.792 [109/268] Linking static target lib/librte_net.a 00:03:32.792 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:32.792 [111/268] Linking static target lib/librte_meter.a 00:03:33.359 [112/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.359 [113/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:33.359 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:33.359 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:33.359 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.359 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.359 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:33.927 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.927 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:33.927 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:33.927 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:34.186 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:34.186 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:34.186 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:34.186 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:34.186 [127/268] Linking static target lib/librte_pci.a 00:03:34.186 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:34.445 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:34.445 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:34.445 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:34.445 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:34.703 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:34.703 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:34.703 [135/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:34.703 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:34.703 [137/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.703 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:34.703 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:34.704 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:34.704 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:34.704 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:34.962 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:34.962 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:34.962 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:34.962 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:34.962 [147/268] Linking static target lib/librte_cmdline.a 00:03:35.221 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:35.221 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:35.221 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:35.480 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:35.480 [152/268] Linking static target lib/librte_timer.a 00:03:35.480 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:35.480 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:35.739 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:35.739 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:35.996 [157/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:35.996 [158/268] Linking static target lib/librte_compressdev.a 00:03:35.996 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:35.996 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:35.996 [161/268] Linking static target lib/librte_ethdev.a 00:03:35.996 [162/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.254 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:36.254 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:36.254 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:36.254 [166/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:36.511 [167/268] Linking static target lib/librte_dmadev.a 00:03:36.511 [168/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.511 [169/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:36.769 [170/268] Linking static target lib/librte_hash.a 00:03:36.769 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:36.769 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:36.769 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:37.029 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:37.029 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.287 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:37.287 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:37.287 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:37.287 [179/268] Compiling C object 
lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:37.287 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.287 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:37.546 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:37.546 [183/268] Linking static target lib/librte_cryptodev.a 00:03:37.546 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:37.546 [185/268] Linking static target lib/librte_power.a 00:03:37.803 [186/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.061 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:38.061 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:38.061 [189/268] Linking static target lib/librte_reorder.a 00:03:38.061 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:38.061 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:38.061 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:38.061 [193/268] Linking static target lib/librte_security.a 00:03:38.625 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.883 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.141 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.141 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:39.141 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:39.141 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:39.399 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:39.656 [201/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:39.656 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:39.985 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:39.985 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:39.985 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:39.985 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:40.244 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.244 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:40.244 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:40.244 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:40.244 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:40.504 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:40.504 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:40.504 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:40.504 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:40.504 [216/268] Linking static target drivers/librte_bus_vdev.a 00:03:40.504 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:40.504 [218/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:40.504 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:40.504 [220/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:40.504 [221/268] Linking static target drivers/librte_bus_pci.a 00:03:40.762 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a 
custom command 00:03:40.762 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:40.762 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:40.762 [225/268] Linking static target drivers/librte_mempool_ring.a 00:03:40.762 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.328 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.702 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:43.636 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.893 [230/268] Linking target lib/librte_eal.so.24.1 00:03:43.893 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:44.152 [232/268] Linking target lib/librte_pci.so.24.1 00:03:44.152 [233/268] Linking target lib/librte_meter.so.24.1 00:03:44.152 [234/268] Linking target lib/librte_dmadev.so.24.1 00:03:44.152 [235/268] Linking target lib/librte_ring.so.24.1 00:03:44.152 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:44.152 [237/268] Linking target lib/librte_timer.so.24.1 00:03:44.152 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:44.152 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:44.152 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:44.152 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:44.152 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:44.152 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:44.152 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:44.152 [245/268] Linking target 
lib/librte_mempool.so.24.1 00:03:44.411 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:44.411 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:44.411 [248/268] Linking target lib/librte_mbuf.so.24.1 00:03:44.411 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:44.670 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:44.670 [251/268] Linking target lib/librte_reorder.so.24.1 00:03:44.670 [252/268] Linking target lib/librte_net.so.24.1 00:03:44.670 [253/268] Linking target lib/librte_compressdev.so.24.1 00:03:44.670 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:03:44.929 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:44.929 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:44.929 [257/268] Linking target lib/librte_cmdline.so.24.1 00:03:44.929 [258/268] Linking target lib/librte_hash.so.24.1 00:03:44.929 [259/268] Linking target lib/librte_security.so.24.1 00:03:44.929 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:45.187 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.445 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:45.445 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:45.445 [264/268] Linking target lib/librte_power.so.24.1 00:03:47.975 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:47.975 [266/268] Linking static target lib/librte_vhost.a 00:03:49.880 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.139 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:50.139 INFO: autodetecting backend as ninja 00:03:50.139 INFO: calculating backend command to run: 
/usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:12.081 CC lib/ut/ut.o 00:04:12.081 CC lib/ut_mock/mock.o 00:04:12.081 CC lib/log/log_deprecated.o 00:04:12.081 CC lib/log/log_flags.o 00:04:12.081 CC lib/log/log.o 00:04:12.338 LIB libspdk_ut.a 00:04:12.338 LIB libspdk_ut_mock.a 00:04:12.338 LIB libspdk_log.a 00:04:12.338 SO libspdk_ut_mock.so.6.0 00:04:12.338 SO libspdk_ut.so.2.0 00:04:12.338 SO libspdk_log.so.7.1 00:04:12.338 SYMLINK libspdk_ut.so 00:04:12.338 SYMLINK libspdk_ut_mock.so 00:04:12.596 SYMLINK libspdk_log.so 00:04:12.853 CC lib/util/crc16.o 00:04:12.853 CC lib/util/crc32c.o 00:04:12.853 CC lib/util/base64.o 00:04:12.853 CC lib/util/crc32.o 00:04:12.853 CC lib/util/bit_array.o 00:04:12.853 CC lib/util/cpuset.o 00:04:12.853 CC lib/ioat/ioat.o 00:04:12.853 CC lib/dma/dma.o 00:04:12.853 CXX lib/trace_parser/trace.o 00:04:12.853 CC lib/vfio_user/host/vfio_user_pci.o 00:04:12.853 CC lib/util/crc32_ieee.o 00:04:12.853 CC lib/vfio_user/host/vfio_user.o 00:04:12.853 CC lib/util/crc64.o 00:04:12.853 CC lib/util/dif.o 00:04:13.117 CC lib/util/fd.o 00:04:13.117 LIB libspdk_dma.a 00:04:13.117 CC lib/util/fd_group.o 00:04:13.117 SO libspdk_dma.so.5.0 00:04:13.117 CC lib/util/file.o 00:04:13.117 CC lib/util/hexlify.o 00:04:13.117 SYMLINK libspdk_dma.so 00:04:13.117 LIB libspdk_ioat.a 00:04:13.117 CC lib/util/iov.o 00:04:13.117 SO libspdk_ioat.so.7.0 00:04:13.117 CC lib/util/math.o 00:04:13.117 CC lib/util/net.o 00:04:13.117 SYMLINK libspdk_ioat.so 00:04:13.117 CC lib/util/pipe.o 00:04:13.117 LIB libspdk_vfio_user.a 00:04:13.117 CC lib/util/strerror_tls.o 00:04:13.117 CC lib/util/string.o 00:04:13.376 SO libspdk_vfio_user.so.5.0 00:04:13.376 CC lib/util/uuid.o 00:04:13.376 SYMLINK libspdk_vfio_user.so 00:04:13.376 CC lib/util/xor.o 00:04:13.376 CC lib/util/zipf.o 00:04:13.376 CC lib/util/md5.o 00:04:13.942 LIB libspdk_util.a 00:04:13.942 SO libspdk_util.so.10.1 00:04:13.942 LIB libspdk_trace_parser.a 00:04:13.942 SO 
libspdk_trace_parser.so.6.0 00:04:14.201 SYMLINK libspdk_util.so 00:04:14.201 SYMLINK libspdk_trace_parser.so 00:04:14.201 CC lib/env_dpdk/memory.o 00:04:14.201 CC lib/env_dpdk/env.o 00:04:14.201 CC lib/env_dpdk/init.o 00:04:14.201 CC lib/env_dpdk/pci.o 00:04:14.201 CC lib/env_dpdk/threads.o 00:04:14.201 CC lib/vmd/vmd.o 00:04:14.201 CC lib/idxd/idxd.o 00:04:14.201 CC lib/json/json_parse.o 00:04:14.201 CC lib/rdma_utils/rdma_utils.o 00:04:14.201 CC lib/conf/conf.o 00:04:14.459 CC lib/env_dpdk/pci_ioat.o 00:04:14.459 CC lib/json/json_util.o 00:04:14.459 LIB libspdk_conf.a 00:04:14.716 CC lib/env_dpdk/pci_virtio.o 00:04:14.716 SO libspdk_conf.so.6.0 00:04:14.716 LIB libspdk_rdma_utils.a 00:04:14.716 SO libspdk_rdma_utils.so.1.0 00:04:14.716 SYMLINK libspdk_conf.so 00:04:14.716 CC lib/json/json_write.o 00:04:14.716 SYMLINK libspdk_rdma_utils.so 00:04:14.716 CC lib/env_dpdk/pci_vmd.o 00:04:14.716 CC lib/env_dpdk/pci_idxd.o 00:04:14.716 CC lib/env_dpdk/pci_event.o 00:04:14.974 CC lib/env_dpdk/sigbus_handler.o 00:04:14.974 CC lib/env_dpdk/pci_dpdk.o 00:04:14.974 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:14.974 CC lib/rdma_provider/common.o 00:04:14.974 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:14.974 LIB libspdk_json.a 00:04:14.974 CC lib/idxd/idxd_user.o 00:04:14.974 SO libspdk_json.so.6.0 00:04:14.974 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:15.236 CC lib/idxd/idxd_kernel.o 00:04:15.236 CC lib/vmd/led.o 00:04:15.236 SYMLINK libspdk_json.so 00:04:15.236 LIB libspdk_rdma_provider.a 00:04:15.236 SO libspdk_rdma_provider.so.7.0 00:04:15.236 LIB libspdk_vmd.a 00:04:15.236 SYMLINK libspdk_rdma_provider.so 00:04:15.236 LIB libspdk_idxd.a 00:04:15.496 SO libspdk_vmd.so.6.0 00:04:15.496 SO libspdk_idxd.so.12.1 00:04:15.496 CC lib/jsonrpc/jsonrpc_client.o 00:04:15.496 CC lib/jsonrpc/jsonrpc_server.o 00:04:15.496 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:15.496 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:15.496 SYMLINK libspdk_vmd.so 00:04:15.496 SYMLINK libspdk_idxd.so 
00:04:15.754 LIB libspdk_jsonrpc.a 00:04:15.754 SO libspdk_jsonrpc.so.6.0 00:04:16.025 SYMLINK libspdk_jsonrpc.so 00:04:16.289 CC lib/rpc/rpc.o 00:04:16.289 LIB libspdk_env_dpdk.a 00:04:16.547 SO libspdk_env_dpdk.so.15.1 00:04:16.547 LIB libspdk_rpc.a 00:04:16.547 SO libspdk_rpc.so.6.0 00:04:16.547 SYMLINK libspdk_env_dpdk.so 00:04:16.805 SYMLINK libspdk_rpc.so 00:04:17.070 CC lib/trace/trace.o 00:04:17.070 CC lib/trace/trace_flags.o 00:04:17.070 CC lib/trace/trace_rpc.o 00:04:17.070 CC lib/keyring/keyring.o 00:04:17.070 CC lib/keyring/keyring_rpc.o 00:04:17.070 CC lib/notify/notify.o 00:04:17.070 CC lib/notify/notify_rpc.o 00:04:17.328 LIB libspdk_notify.a 00:04:17.328 SO libspdk_notify.so.6.0 00:04:17.328 LIB libspdk_trace.a 00:04:17.328 LIB libspdk_keyring.a 00:04:17.328 SYMLINK libspdk_notify.so 00:04:17.328 SO libspdk_keyring.so.2.0 00:04:17.586 SO libspdk_trace.so.11.0 00:04:17.586 SYMLINK libspdk_keyring.so 00:04:17.586 SYMLINK libspdk_trace.so 00:04:17.845 CC lib/sock/sock.o 00:04:17.845 CC lib/sock/sock_rpc.o 00:04:17.845 CC lib/thread/thread.o 00:04:17.845 CC lib/thread/iobuf.o 00:04:18.411 LIB libspdk_sock.a 00:04:18.411 SO libspdk_sock.so.10.0 00:04:18.411 SYMLINK libspdk_sock.so 00:04:18.981 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:18.981 CC lib/nvme/nvme_ns_cmd.o 00:04:18.981 CC lib/nvme/nvme_ctrlr.o 00:04:18.981 CC lib/nvme/nvme_fabric.o 00:04:18.981 CC lib/nvme/nvme_pcie.o 00:04:18.981 CC lib/nvme/nvme_ns.o 00:04:18.981 CC lib/nvme/nvme_pcie_common.o 00:04:18.981 CC lib/nvme/nvme_qpair.o 00:04:18.981 CC lib/nvme/nvme.o 00:04:19.551 CC lib/nvme/nvme_quirks.o 00:04:19.551 CC lib/nvme/nvme_transport.o 00:04:19.811 LIB libspdk_thread.a 00:04:19.811 CC lib/nvme/nvme_discovery.o 00:04:19.811 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:19.811 SO libspdk_thread.so.11.0 00:04:19.811 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:19.811 CC lib/nvme/nvme_tcp.o 00:04:19.811 SYMLINK libspdk_thread.so 00:04:20.071 CC lib/nvme/nvme_opal.o 00:04:20.071 CC lib/accel/accel.o 
00:04:20.071 CC lib/nvme/nvme_io_msg.o 00:04:20.071 CC lib/nvme/nvme_poll_group.o 00:04:20.331 CC lib/nvme/nvme_zns.o 00:04:20.591 CC lib/nvme/nvme_stubs.o 00:04:20.591 CC lib/blob/blobstore.o 00:04:20.591 CC lib/blob/request.o 00:04:20.591 CC lib/blob/zeroes.o 00:04:20.851 CC lib/accel/accel_rpc.o 00:04:20.851 CC lib/accel/accel_sw.o 00:04:20.851 CC lib/nvme/nvme_auth.o 00:04:21.111 CC lib/blob/blob_bs_dev.o 00:04:21.111 CC lib/nvme/nvme_cuse.o 00:04:21.111 CC lib/nvme/nvme_rdma.o 00:04:21.371 CC lib/init/json_config.o 00:04:21.371 CC lib/init/subsystem.o 00:04:21.371 CC lib/virtio/virtio.o 00:04:21.371 LIB libspdk_accel.a 00:04:21.630 SO libspdk_accel.so.16.0 00:04:21.630 SYMLINK libspdk_accel.so 00:04:21.630 CC lib/virtio/virtio_vhost_user.o 00:04:21.630 CC lib/init/subsystem_rpc.o 00:04:21.630 CC lib/virtio/virtio_vfio_user.o 00:04:21.630 CC lib/fsdev/fsdev.o 00:04:21.908 CC lib/fsdev/fsdev_io.o 00:04:21.908 CC lib/init/rpc.o 00:04:21.908 CC lib/fsdev/fsdev_rpc.o 00:04:21.908 CC lib/virtio/virtio_pci.o 00:04:21.908 LIB libspdk_init.a 00:04:22.167 SO libspdk_init.so.6.0 00:04:22.167 SYMLINK libspdk_init.so 00:04:22.167 CC lib/bdev/bdev_zone.o 00:04:22.167 CC lib/bdev/bdev.o 00:04:22.167 CC lib/bdev/bdev_rpc.o 00:04:22.167 CC lib/bdev/part.o 00:04:22.167 CC lib/bdev/scsi_nvme.o 00:04:22.167 LIB libspdk_virtio.a 00:04:22.425 SO libspdk_virtio.so.7.0 00:04:22.425 CC lib/event/app.o 00:04:22.425 SYMLINK libspdk_virtio.so 00:04:22.425 CC lib/event/reactor.o 00:04:22.425 CC lib/event/log_rpc.o 00:04:22.425 CC lib/event/app_rpc.o 00:04:22.425 LIB libspdk_fsdev.a 00:04:22.684 CC lib/event/scheduler_static.o 00:04:22.684 SO libspdk_fsdev.so.2.0 00:04:22.684 SYMLINK libspdk_fsdev.so 00:04:22.943 LIB libspdk_nvme.a 00:04:22.943 LIB libspdk_event.a 00:04:22.943 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:23.203 SO libspdk_event.so.14.0 00:04:23.203 SO libspdk_nvme.so.15.0 00:04:23.203 SYMLINK libspdk_event.so 00:04:23.461 SYMLINK libspdk_nvme.so 00:04:23.721 LIB 
libspdk_fuse_dispatcher.a 00:04:23.980 SO libspdk_fuse_dispatcher.so.1.0 00:04:23.980 SYMLINK libspdk_fuse_dispatcher.so 00:04:24.919 LIB libspdk_blob.a 00:04:24.919 SO libspdk_blob.so.12.0 00:04:25.179 SYMLINK libspdk_blob.so 00:04:25.439 CC lib/blobfs/tree.o 00:04:25.439 CC lib/blobfs/blobfs.o 00:04:25.439 CC lib/lvol/lvol.o 00:04:25.698 LIB libspdk_bdev.a 00:04:25.698 SO libspdk_bdev.so.17.0 00:04:25.698 SYMLINK libspdk_bdev.so 00:04:25.958 CC lib/nbd/nbd.o 00:04:25.958 CC lib/nbd/nbd_rpc.o 00:04:25.958 CC lib/nvmf/ctrlr.o 00:04:25.958 CC lib/nvmf/ctrlr_bdev.o 00:04:25.958 CC lib/nvmf/ctrlr_discovery.o 00:04:25.958 CC lib/ftl/ftl_core.o 00:04:25.958 CC lib/scsi/dev.o 00:04:25.958 CC lib/ublk/ublk.o 00:04:26.217 CC lib/ftl/ftl_init.o 00:04:26.217 CC lib/scsi/lun.o 00:04:26.477 LIB libspdk_blobfs.a 00:04:26.477 SO libspdk_blobfs.so.11.0 00:04:26.477 CC lib/nvmf/subsystem.o 00:04:26.477 CC lib/ftl/ftl_layout.o 00:04:26.477 SYMLINK libspdk_blobfs.so 00:04:26.477 CC lib/ftl/ftl_debug.o 00:04:26.477 LIB libspdk_nbd.a 00:04:26.477 LIB libspdk_lvol.a 00:04:26.477 SO libspdk_nbd.so.7.0 00:04:26.477 SO libspdk_lvol.so.11.0 00:04:26.477 CC lib/nvmf/nvmf.o 00:04:26.735 SYMLINK libspdk_nbd.so 00:04:26.735 SYMLINK libspdk_lvol.so 00:04:26.735 CC lib/nvmf/nvmf_rpc.o 00:04:26.735 CC lib/nvmf/transport.o 00:04:26.735 CC lib/scsi/port.o 00:04:26.735 CC lib/scsi/scsi.o 00:04:26.735 CC lib/ublk/ublk_rpc.o 00:04:26.735 CC lib/scsi/scsi_bdev.o 00:04:26.735 CC lib/ftl/ftl_io.o 00:04:26.735 CC lib/ftl/ftl_sb.o 00:04:26.993 CC lib/scsi/scsi_pr.o 00:04:26.993 LIB libspdk_ublk.a 00:04:26.993 SO libspdk_ublk.so.3.0 00:04:26.993 SYMLINK libspdk_ublk.so 00:04:26.993 CC lib/ftl/ftl_l2p.o 00:04:26.993 CC lib/scsi/scsi_rpc.o 00:04:26.993 CC lib/scsi/task.o 00:04:27.252 CC lib/ftl/ftl_l2p_flat.o 00:04:27.252 CC lib/ftl/ftl_nv_cache.o 00:04:27.252 CC lib/ftl/ftl_band.o 00:04:27.252 CC lib/ftl/ftl_band_ops.o 00:04:27.252 LIB libspdk_scsi.a 00:04:27.511 CC lib/ftl/ftl_writer.o 00:04:27.511 CC 
lib/nvmf/tcp.o 00:04:27.511 SO libspdk_scsi.so.9.0 00:04:27.511 SYMLINK libspdk_scsi.so 00:04:27.511 CC lib/nvmf/stubs.o 00:04:27.511 CC lib/nvmf/mdns_server.o 00:04:27.511 CC lib/ftl/ftl_rq.o 00:04:27.511 CC lib/ftl/ftl_reloc.o 00:04:27.771 CC lib/ftl/ftl_l2p_cache.o 00:04:27.771 CC lib/iscsi/conn.o 00:04:27.771 CC lib/ftl/ftl_p2l.o 00:04:27.771 CC lib/ftl/ftl_p2l_log.o 00:04:28.031 CC lib/ftl/mngt/ftl_mngt.o 00:04:28.031 CC lib/iscsi/init_grp.o 00:04:28.293 CC lib/nvmf/rdma.o 00:04:28.293 CC lib/iscsi/iscsi.o 00:04:28.293 CC lib/iscsi/param.o 00:04:28.293 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:28.293 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:28.293 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:28.293 CC lib/vhost/vhost.o 00:04:28.552 CC lib/vhost/vhost_rpc.o 00:04:28.552 CC lib/vhost/vhost_scsi.o 00:04:28.552 CC lib/nvmf/auth.o 00:04:28.552 CC lib/vhost/vhost_blk.o 00:04:28.552 CC lib/vhost/rte_vhost_user.o 00:04:28.552 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:29.120 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:29.120 CC lib/iscsi/portal_grp.o 00:04:29.379 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:29.379 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:29.379 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:29.379 CC lib/iscsi/tgt_node.o 00:04:29.638 CC lib/iscsi/iscsi_subsystem.o 00:04:29.638 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:29.638 CC lib/iscsi/iscsi_rpc.o 00:04:29.638 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:29.638 CC lib/iscsi/task.o 00:04:29.896 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:29.896 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:29.896 LIB libspdk_vhost.a 00:04:29.896 CC lib/ftl/utils/ftl_conf.o 00:04:29.896 CC lib/ftl/utils/ftl_md.o 00:04:29.896 SO libspdk_vhost.so.8.0 00:04:29.896 CC lib/ftl/utils/ftl_mempool.o 00:04:30.155 CC lib/ftl/utils/ftl_bitmap.o 00:04:30.155 SYMLINK libspdk_vhost.so 00:04:30.155 CC lib/ftl/utils/ftl_property.o 00:04:30.155 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:30.155 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:30.155 CC lib/ftl/upgrade/ftl_sb_upgrade.o 
00:04:30.155 LIB libspdk_iscsi.a 00:04:30.155 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:30.155 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:30.155 SO libspdk_iscsi.so.8.0 00:04:30.414 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:30.414 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:30.414 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:30.414 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:30.414 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:30.414 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:30.414 SYMLINK libspdk_iscsi.so 00:04:30.414 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:30.414 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:30.414 CC lib/ftl/base/ftl_base_dev.o 00:04:30.672 CC lib/ftl/base/ftl_base_bdev.o 00:04:30.672 CC lib/ftl/ftl_trace.o 00:04:30.932 LIB libspdk_ftl.a 00:04:31.191 SO libspdk_ftl.so.9.0 00:04:31.191 LIB libspdk_nvmf.a 00:04:31.449 SO libspdk_nvmf.so.20.0 00:04:31.449 SYMLINK libspdk_ftl.so 00:04:31.708 SYMLINK libspdk_nvmf.so 00:04:32.276 CC module/env_dpdk/env_dpdk_rpc.o 00:04:32.276 CC module/keyring/file/keyring.o 00:04:32.276 CC module/sock/posix/posix.o 00:04:32.276 CC module/keyring/linux/keyring.o 00:04:32.276 CC module/accel/ioat/accel_ioat.o 00:04:32.276 CC module/fsdev/aio/fsdev_aio.o 00:04:32.276 CC module/accel/error/accel_error.o 00:04:32.276 CC module/accel/dsa/accel_dsa.o 00:04:32.276 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:32.276 CC module/blob/bdev/blob_bdev.o 00:04:32.276 LIB libspdk_env_dpdk_rpc.a 00:04:32.276 SO libspdk_env_dpdk_rpc.so.6.0 00:04:32.276 CC module/keyring/linux/keyring_rpc.o 00:04:32.276 SYMLINK libspdk_env_dpdk_rpc.so 00:04:32.276 CC module/accel/ioat/accel_ioat_rpc.o 00:04:32.536 CC module/keyring/file/keyring_rpc.o 00:04:32.536 LIB libspdk_scheduler_dynamic.a 00:04:32.536 SO libspdk_scheduler_dynamic.so.4.0 00:04:32.536 CC module/accel/error/accel_error_rpc.o 00:04:32.536 CC module/accel/dsa/accel_dsa_rpc.o 00:04:32.536 LIB libspdk_keyring_linux.a 00:04:32.536 LIB libspdk_accel_ioat.a 00:04:32.536 LIB libspdk_keyring_file.a 
00:04:32.536 SYMLINK libspdk_scheduler_dynamic.so 00:04:32.536 SO libspdk_keyring_linux.so.1.0 00:04:32.536 SO libspdk_accel_ioat.so.6.0 00:04:32.536 SO libspdk_keyring_file.so.2.0 00:04:32.536 LIB libspdk_blob_bdev.a 00:04:32.536 CC module/accel/iaa/accel_iaa.o 00:04:32.536 SO libspdk_blob_bdev.so.12.0 00:04:32.536 SYMLINK libspdk_keyring_file.so 00:04:32.536 SYMLINK libspdk_accel_ioat.so 00:04:32.536 SYMLINK libspdk_keyring_linux.so 00:04:32.536 LIB libspdk_accel_error.a 00:04:32.796 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:32.796 CC module/fsdev/aio/linux_aio_mgr.o 00:04:32.796 SO libspdk_accel_error.so.2.0 00:04:32.796 LIB libspdk_accel_dsa.a 00:04:32.796 SYMLINK libspdk_blob_bdev.so 00:04:32.796 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:32.796 SO libspdk_accel_dsa.so.5.0 00:04:32.796 SYMLINK libspdk_accel_error.so 00:04:32.796 CC module/accel/iaa/accel_iaa_rpc.o 00:04:32.796 SYMLINK libspdk_accel_dsa.so 00:04:32.796 CC module/scheduler/gscheduler/gscheduler.o 00:04:32.796 LIB libspdk_accel_iaa.a 00:04:32.796 LIB libspdk_scheduler_dpdk_governor.a 00:04:33.056 SO libspdk_accel_iaa.so.3.0 00:04:33.056 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:33.056 CC module/bdev/delay/vbdev_delay.o 00:04:33.056 CC module/bdev/error/vbdev_error.o 00:04:33.056 CC module/bdev/gpt/gpt.o 00:04:33.056 LIB libspdk_scheduler_gscheduler.a 00:04:33.056 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:33.056 CC module/bdev/gpt/vbdev_gpt.o 00:04:33.056 SYMLINK libspdk_accel_iaa.so 00:04:33.056 CC module/bdev/error/vbdev_error_rpc.o 00:04:33.056 SO libspdk_scheduler_gscheduler.so.4.0 00:04:33.056 CC module/bdev/lvol/vbdev_lvol.o 00:04:33.056 CC module/blobfs/bdev/blobfs_bdev.o 00:04:33.056 LIB libspdk_fsdev_aio.a 00:04:33.316 SYMLINK libspdk_scheduler_gscheduler.so 00:04:33.316 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:33.316 LIB libspdk_sock_posix.a 00:04:33.316 SO libspdk_fsdev_aio.so.1.0 00:04:33.316 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:33.316 SO 
libspdk_sock_posix.so.6.0 00:04:33.316 SYMLINK libspdk_fsdev_aio.so 00:04:33.316 LIB libspdk_bdev_error.a 00:04:33.316 LIB libspdk_bdev_gpt.a 00:04:33.316 SO libspdk_bdev_error.so.6.0 00:04:33.316 SYMLINK libspdk_sock_posix.so 00:04:33.316 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:33.316 SO libspdk_bdev_gpt.so.6.0 00:04:33.316 LIB libspdk_blobfs_bdev.a 00:04:33.316 CC module/bdev/malloc/bdev_malloc.o 00:04:33.316 SO libspdk_blobfs_bdev.so.6.0 00:04:33.577 SYMLINK libspdk_bdev_error.so 00:04:33.577 SYMLINK libspdk_bdev_gpt.so 00:04:33.577 CC module/bdev/null/bdev_null.o 00:04:33.577 CC module/bdev/null/bdev_null_rpc.o 00:04:33.577 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:33.577 SYMLINK libspdk_blobfs_bdev.so 00:04:33.577 CC module/bdev/nvme/bdev_nvme.o 00:04:33.577 LIB libspdk_bdev_delay.a 00:04:33.577 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:33.577 CC module/bdev/passthru/vbdev_passthru.o 00:04:33.577 SO libspdk_bdev_delay.so.6.0 00:04:33.577 LIB libspdk_bdev_lvol.a 00:04:33.577 CC module/bdev/raid/bdev_raid.o 00:04:33.577 SO libspdk_bdev_lvol.so.6.0 00:04:33.577 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:33.835 SYMLINK libspdk_bdev_delay.so 00:04:33.835 CC module/bdev/nvme/nvme_rpc.o 00:04:33.835 CC module/bdev/nvme/bdev_mdns_client.o 00:04:33.835 SYMLINK libspdk_bdev_lvol.so 00:04:33.835 CC module/bdev/raid/bdev_raid_rpc.o 00:04:33.835 LIB libspdk_bdev_null.a 00:04:33.835 SO libspdk_bdev_null.so.6.0 00:04:33.835 CC module/bdev/nvme/vbdev_opal.o 00:04:33.835 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:33.835 LIB libspdk_bdev_malloc.a 00:04:33.835 SYMLINK libspdk_bdev_null.so 00:04:33.835 SO libspdk_bdev_malloc.so.6.0 00:04:34.094 LIB libspdk_bdev_passthru.a 00:04:34.094 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:34.094 SO libspdk_bdev_passthru.so.6.0 00:04:34.094 CC module/bdev/raid/bdev_raid_sb.o 00:04:34.094 SYMLINK libspdk_bdev_malloc.so 00:04:34.094 CC module/bdev/split/vbdev_split.o 00:04:34.094 SYMLINK libspdk_bdev_passthru.so 
00:04:34.094 CC module/bdev/split/vbdev_split_rpc.o 00:04:34.094 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:34.355 CC module/bdev/aio/bdev_aio.o 00:04:34.355 CC module/bdev/ftl/bdev_ftl.o 00:04:34.355 CC module/bdev/aio/bdev_aio_rpc.o 00:04:34.355 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:34.355 CC module/bdev/raid/raid0.o 00:04:34.355 CC module/bdev/iscsi/bdev_iscsi.o 00:04:34.355 LIB libspdk_bdev_split.a 00:04:34.355 SO libspdk_bdev_split.so.6.0 00:04:34.615 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:34.615 SYMLINK libspdk_bdev_split.so 00:04:34.615 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:34.615 LIB libspdk_bdev_ftl.a 00:04:34.615 CC module/bdev/raid/raid1.o 00:04:34.615 CC module/bdev/raid/concat.o 00:04:34.615 SO libspdk_bdev_ftl.so.6.0 00:04:34.615 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:34.615 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:34.615 SYMLINK libspdk_bdev_ftl.so 00:04:34.875 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:34.875 LIB libspdk_bdev_aio.a 00:04:34.875 LIB libspdk_bdev_zone_block.a 00:04:34.875 SO libspdk_bdev_aio.so.6.0 00:04:34.875 SO libspdk_bdev_zone_block.so.6.0 00:04:34.875 LIB libspdk_bdev_iscsi.a 00:04:34.875 SO libspdk_bdev_iscsi.so.6.0 00:04:34.875 SYMLINK libspdk_bdev_aio.so 00:04:34.875 CC module/bdev/raid/raid5f.o 00:04:34.875 SYMLINK libspdk_bdev_zone_block.so 00:04:34.875 SYMLINK libspdk_bdev_iscsi.so 00:04:35.470 LIB libspdk_bdev_virtio.a 00:04:35.470 SO libspdk_bdev_virtio.so.6.0 00:04:35.470 SYMLINK libspdk_bdev_virtio.so 00:04:35.470 LIB libspdk_bdev_raid.a 00:04:35.470 SO libspdk_bdev_raid.so.6.0 00:04:35.730 SYMLINK libspdk_bdev_raid.so 00:04:37.111 LIB libspdk_bdev_nvme.a 00:04:37.111 SO libspdk_bdev_nvme.so.7.1 00:04:37.111 SYMLINK libspdk_bdev_nvme.so 00:04:38.052 CC module/event/subsystems/iobuf/iobuf.o 00:04:38.052 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:38.052 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:38.052 CC module/event/subsystems/vmd/vmd.o 00:04:38.052 CC 
module/event/subsystems/sock/sock.o 00:04:38.052 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:38.052 CC module/event/subsystems/keyring/keyring.o 00:04:38.052 CC module/event/subsystems/scheduler/scheduler.o 00:04:38.052 CC module/event/subsystems/fsdev/fsdev.o 00:04:38.052 LIB libspdk_event_keyring.a 00:04:38.052 LIB libspdk_event_vhost_blk.a 00:04:38.052 LIB libspdk_event_scheduler.a 00:04:38.052 LIB libspdk_event_sock.a 00:04:38.052 LIB libspdk_event_vmd.a 00:04:38.052 LIB libspdk_event_iobuf.a 00:04:38.052 LIB libspdk_event_fsdev.a 00:04:38.052 SO libspdk_event_keyring.so.1.0 00:04:38.052 SO libspdk_event_vhost_blk.so.3.0 00:04:38.052 SO libspdk_event_scheduler.so.4.0 00:04:38.052 SO libspdk_event_sock.so.5.0 00:04:38.052 SO libspdk_event_fsdev.so.1.0 00:04:38.052 SO libspdk_event_vmd.so.6.0 00:04:38.052 SO libspdk_event_iobuf.so.3.0 00:04:38.052 SYMLINK libspdk_event_keyring.so 00:04:38.052 SYMLINK libspdk_event_sock.so 00:04:38.052 SYMLINK libspdk_event_vhost_blk.so 00:04:38.052 SYMLINK libspdk_event_scheduler.so 00:04:38.052 SYMLINK libspdk_event_fsdev.so 00:04:38.052 SYMLINK libspdk_event_vmd.so 00:04:38.052 SYMLINK libspdk_event_iobuf.so 00:04:38.620 CC module/event/subsystems/accel/accel.o 00:04:38.620 LIB libspdk_event_accel.a 00:04:38.620 SO libspdk_event_accel.so.6.0 00:04:38.880 SYMLINK libspdk_event_accel.so 00:04:39.141 CC module/event/subsystems/bdev/bdev.o 00:04:39.401 LIB libspdk_event_bdev.a 00:04:39.401 SO libspdk_event_bdev.so.6.0 00:04:39.662 SYMLINK libspdk_event_bdev.so 00:04:39.922 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:39.922 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:39.922 CC module/event/subsystems/ublk/ublk.o 00:04:39.922 CC module/event/subsystems/scsi/scsi.o 00:04:39.922 CC module/event/subsystems/nbd/nbd.o 00:04:39.922 LIB libspdk_event_ublk.a 00:04:39.922 LIB libspdk_event_nbd.a 00:04:40.181 LIB libspdk_event_scsi.a 00:04:40.181 SO libspdk_event_ublk.so.3.0 00:04:40.181 SO libspdk_event_scsi.so.6.0 00:04:40.181 
SO libspdk_event_nbd.so.6.0 00:04:40.181 LIB libspdk_event_nvmf.a 00:04:40.181 SYMLINK libspdk_event_scsi.so 00:04:40.181 SYMLINK libspdk_event_ublk.so 00:04:40.181 SYMLINK libspdk_event_nbd.so 00:04:40.181 SO libspdk_event_nvmf.so.6.0 00:04:40.181 SYMLINK libspdk_event_nvmf.so 00:04:40.440 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:40.440 CC module/event/subsystems/iscsi/iscsi.o 00:04:40.699 LIB libspdk_event_vhost_scsi.a 00:04:40.699 LIB libspdk_event_iscsi.a 00:04:40.699 SO libspdk_event_vhost_scsi.so.3.0 00:04:40.699 SO libspdk_event_iscsi.so.6.0 00:04:40.699 SYMLINK libspdk_event_vhost_scsi.so 00:04:40.959 SYMLINK libspdk_event_iscsi.so 00:04:40.959 SO libspdk.so.6.0 00:04:40.959 SYMLINK libspdk.so 00:04:41.527 CC app/spdk_nvme_identify/identify.o 00:04:41.527 CC app/trace_record/trace_record.o 00:04:41.527 CC app/spdk_nvme_perf/perf.o 00:04:41.527 CXX app/trace/trace.o 00:04:41.527 CC app/spdk_lspci/spdk_lspci.o 00:04:41.527 CC app/nvmf_tgt/nvmf_main.o 00:04:41.527 CC app/iscsi_tgt/iscsi_tgt.o 00:04:41.527 CC app/spdk_tgt/spdk_tgt.o 00:04:41.527 CC examples/util/zipf/zipf.o 00:04:41.527 CC test/thread/poller_perf/poller_perf.o 00:04:41.527 LINK spdk_lspci 00:04:41.527 LINK nvmf_tgt 00:04:41.786 LINK zipf 00:04:41.786 LINK iscsi_tgt 00:04:41.786 LINK poller_perf 00:04:41.786 LINK spdk_tgt 00:04:41.786 LINK spdk_trace_record 00:04:41.786 LINK spdk_trace 00:04:41.786 CC app/spdk_nvme_discover/discovery_aer.o 00:04:42.045 CC app/spdk_top/spdk_top.o 00:04:42.045 CC app/spdk_dd/spdk_dd.o 00:04:42.045 CC examples/ioat/perf/perf.o 00:04:42.045 LINK spdk_nvme_discover 00:04:42.045 CC test/dma/test_dma/test_dma.o 00:04:42.304 CC examples/ioat/verify/verify.o 00:04:42.304 CC app/fio/nvme/fio_plugin.o 00:04:42.304 CC examples/vmd/lsvmd/lsvmd.o 00:04:42.304 LINK ioat_perf 00:04:42.304 LINK lsvmd 00:04:42.563 LINK verify 00:04:42.563 CC app/vhost/vhost.o 00:04:42.563 LINK spdk_nvme_perf 00:04:42.563 LINK spdk_dd 00:04:42.563 LINK spdk_nvme_identify 
00:04:42.823 CC examples/vmd/led/led.o 00:04:42.823 LINK vhost 00:04:42.823 LINK test_dma 00:04:42.823 CC examples/idxd/perf/perf.o 00:04:42.823 CC test/app/bdev_svc/bdev_svc.o 00:04:42.823 LINK led 00:04:42.823 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:43.083 LINK spdk_nvme 00:04:43.083 CC examples/thread/thread/thread_ex.o 00:04:43.083 CC examples/sock/hello_world/hello_sock.o 00:04:43.083 LINK bdev_svc 00:04:43.083 LINK interrupt_tgt 00:04:43.083 CC test/app/histogram_perf/histogram_perf.o 00:04:43.343 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:43.343 CC app/fio/bdev/fio_plugin.o 00:04:43.343 CC test/app/jsoncat/jsoncat.o 00:04:43.343 LINK idxd_perf 00:04:43.343 LINK spdk_top 00:04:43.343 LINK thread 00:04:43.343 LINK hello_sock 00:04:43.343 LINK histogram_perf 00:04:43.343 LINK jsoncat 00:04:43.602 CC test/app/stub/stub.o 00:04:43.602 TEST_HEADER include/spdk/accel.h 00:04:43.602 TEST_HEADER include/spdk/accel_module.h 00:04:43.602 TEST_HEADER include/spdk/assert.h 00:04:43.602 TEST_HEADER include/spdk/barrier.h 00:04:43.602 TEST_HEADER include/spdk/base64.h 00:04:43.602 TEST_HEADER include/spdk/bdev.h 00:04:43.602 TEST_HEADER include/spdk/bdev_module.h 00:04:43.602 TEST_HEADER include/spdk/bdev_zone.h 00:04:43.602 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:43.602 TEST_HEADER include/spdk/bit_array.h 00:04:43.602 TEST_HEADER include/spdk/bit_pool.h 00:04:43.602 TEST_HEADER include/spdk/blob_bdev.h 00:04:43.602 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:43.602 TEST_HEADER include/spdk/blobfs.h 00:04:43.602 TEST_HEADER include/spdk/blob.h 00:04:43.602 TEST_HEADER include/spdk/conf.h 00:04:43.602 TEST_HEADER include/spdk/config.h 00:04:43.602 TEST_HEADER include/spdk/cpuset.h 00:04:43.602 TEST_HEADER include/spdk/crc16.h 00:04:43.602 TEST_HEADER include/spdk/crc32.h 00:04:43.602 TEST_HEADER include/spdk/crc64.h 00:04:43.602 TEST_HEADER include/spdk/dif.h 00:04:43.602 TEST_HEADER include/spdk/dma.h 00:04:43.602 TEST_HEADER include/spdk/endian.h 
00:04:43.602 TEST_HEADER include/spdk/env_dpdk.h 00:04:43.602 TEST_HEADER include/spdk/env.h 00:04:43.602 TEST_HEADER include/spdk/event.h 00:04:43.602 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:43.602 TEST_HEADER include/spdk/fd_group.h 00:04:43.602 TEST_HEADER include/spdk/fd.h 00:04:43.602 TEST_HEADER include/spdk/file.h 00:04:43.602 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:43.602 TEST_HEADER include/spdk/fsdev.h 00:04:43.603 TEST_HEADER include/spdk/fsdev_module.h 00:04:43.603 TEST_HEADER include/spdk/ftl.h 00:04:43.603 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:43.603 TEST_HEADER include/spdk/gpt_spec.h 00:04:43.603 TEST_HEADER include/spdk/hexlify.h 00:04:43.603 TEST_HEADER include/spdk/histogram_data.h 00:04:43.603 TEST_HEADER include/spdk/idxd.h 00:04:43.603 TEST_HEADER include/spdk/idxd_spec.h 00:04:43.603 TEST_HEADER include/spdk/init.h 00:04:43.603 TEST_HEADER include/spdk/ioat.h 00:04:43.603 TEST_HEADER include/spdk/ioat_spec.h 00:04:43.603 TEST_HEADER include/spdk/iscsi_spec.h 00:04:43.603 TEST_HEADER include/spdk/json.h 00:04:43.603 TEST_HEADER include/spdk/jsonrpc.h 00:04:43.603 TEST_HEADER include/spdk/keyring.h 00:04:43.603 TEST_HEADER include/spdk/keyring_module.h 00:04:43.603 TEST_HEADER include/spdk/likely.h 00:04:43.603 TEST_HEADER include/spdk/log.h 00:04:43.603 TEST_HEADER include/spdk/lvol.h 00:04:43.603 TEST_HEADER include/spdk/md5.h 00:04:43.603 TEST_HEADER include/spdk/memory.h 00:04:43.603 TEST_HEADER include/spdk/mmio.h 00:04:43.603 TEST_HEADER include/spdk/nbd.h 00:04:43.603 TEST_HEADER include/spdk/net.h 00:04:43.603 TEST_HEADER include/spdk/notify.h 00:04:43.603 TEST_HEADER include/spdk/nvme.h 00:04:43.603 TEST_HEADER include/spdk/nvme_intel.h 00:04:43.603 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:43.603 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:43.603 TEST_HEADER include/spdk/nvme_spec.h 00:04:43.603 TEST_HEADER include/spdk/nvme_zns.h 00:04:43.603 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:43.603 
TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:43.603 TEST_HEADER include/spdk/nvmf.h 00:04:43.603 TEST_HEADER include/spdk/nvmf_spec.h 00:04:43.603 TEST_HEADER include/spdk/nvmf_transport.h 00:04:43.603 TEST_HEADER include/spdk/opal.h 00:04:43.603 TEST_HEADER include/spdk/opal_spec.h 00:04:43.603 TEST_HEADER include/spdk/pci_ids.h 00:04:43.603 TEST_HEADER include/spdk/pipe.h 00:04:43.603 TEST_HEADER include/spdk/queue.h 00:04:43.603 TEST_HEADER include/spdk/reduce.h 00:04:43.603 TEST_HEADER include/spdk/rpc.h 00:04:43.603 TEST_HEADER include/spdk/scheduler.h 00:04:43.603 TEST_HEADER include/spdk/scsi.h 00:04:43.603 TEST_HEADER include/spdk/scsi_spec.h 00:04:43.603 TEST_HEADER include/spdk/sock.h 00:04:43.603 TEST_HEADER include/spdk/stdinc.h 00:04:43.603 TEST_HEADER include/spdk/string.h 00:04:43.603 TEST_HEADER include/spdk/thread.h 00:04:43.603 TEST_HEADER include/spdk/trace.h 00:04:43.603 TEST_HEADER include/spdk/trace_parser.h 00:04:43.603 TEST_HEADER include/spdk/tree.h 00:04:43.603 TEST_HEADER include/spdk/ublk.h 00:04:43.603 TEST_HEADER include/spdk/util.h 00:04:43.603 TEST_HEADER include/spdk/uuid.h 00:04:43.603 TEST_HEADER include/spdk/version.h 00:04:43.603 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:43.603 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:43.603 TEST_HEADER include/spdk/vhost.h 00:04:43.603 TEST_HEADER include/spdk/vmd.h 00:04:43.603 TEST_HEADER include/spdk/xor.h 00:04:43.603 LINK stub 00:04:43.603 TEST_HEADER include/spdk/zipf.h 00:04:43.603 CXX test/cpp_headers/accel.o 00:04:43.863 LINK nvme_fuzz 00:04:43.863 CC examples/nvme/hello_world/hello_world.o 00:04:43.863 CC test/event/event_perf/event_perf.o 00:04:43.863 CC test/env/vtophys/vtophys.o 00:04:43.863 LINK spdk_bdev 00:04:43.863 CC test/env/mem_callbacks/mem_callbacks.o 00:04:43.863 CXX test/cpp_headers/accel_module.o 00:04:43.863 CXX test/cpp_headers/assert.o 00:04:43.863 LINK event_perf 00:04:44.123 CXX test/cpp_headers/barrier.o 00:04:44.123 LINK vtophys 00:04:44.123 LINK 
hello_world 00:04:44.123 LINK vhost_fuzz 00:04:44.123 CC test/event/reactor/reactor.o 00:04:44.123 CXX test/cpp_headers/base64.o 00:04:44.123 CXX test/cpp_headers/bdev.o 00:04:44.123 CC test/event/reactor_perf/reactor_perf.o 00:04:44.383 CC test/event/app_repeat/app_repeat.o 00:04:44.383 LINK reactor 00:04:44.383 CC examples/accel/perf/accel_perf.o 00:04:44.383 CC examples/nvme/reconnect/reconnect.o 00:04:44.383 CXX test/cpp_headers/bdev_module.o 00:04:44.383 LINK reactor_perf 00:04:44.383 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:44.383 LINK app_repeat 00:04:44.383 LINK mem_callbacks 00:04:44.643 CC test/event/scheduler/scheduler.o 00:04:44.643 CC examples/blob/hello_world/hello_blob.o 00:04:44.643 CXX test/cpp_headers/bdev_zone.o 00:04:44.643 CC examples/blob/cli/blobcli.o 00:04:44.903 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:44.903 LINK scheduler 00:04:44.903 LINK reconnect 00:04:44.903 CC examples/nvme/arbitration/arbitration.o 00:04:44.903 CXX test/cpp_headers/bit_array.o 00:04:44.903 LINK hello_blob 00:04:44.903 LINK env_dpdk_post_init 00:04:44.903 LINK accel_perf 00:04:45.164 CXX test/cpp_headers/bit_pool.o 00:04:45.164 LINK nvme_manage 00:04:45.164 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:45.164 CXX test/cpp_headers/blob_bdev.o 00:04:45.164 CC test/nvme/aer/aer.o 00:04:45.164 LINK arbitration 00:04:45.164 CC test/env/memory/memory_ut.o 00:04:45.164 CC test/env/pci/pci_ut.o 00:04:45.424 LINK blobcli 00:04:45.424 CC examples/nvme/hotplug/hotplug.o 00:04:45.424 CXX test/cpp_headers/blobfs_bdev.o 00:04:45.424 CC examples/bdev/hello_world/hello_bdev.o 00:04:45.424 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:45.424 LINK hello_fsdev 00:04:45.684 LINK aer 00:04:45.684 CXX test/cpp_headers/blobfs.o 00:04:45.684 LINK iscsi_fuzz 00:04:45.684 LINK hotplug 00:04:45.684 LINK hello_bdev 00:04:45.684 CC examples/nvme/abort/abort.o 00:04:45.684 LINK cmb_copy 00:04:45.684 LINK pci_ut 00:04:45.945 CXX test/cpp_headers/blob.o 00:04:45.945 CC 
test/nvme/reset/reset.o 00:04:45.945 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:45.945 CXX test/cpp_headers/conf.o 00:04:45.945 CC test/rpc_client/rpc_client_test.o 00:04:46.205 CXX test/cpp_headers/config.o 00:04:46.205 LINK pmr_persistence 00:04:46.205 CXX test/cpp_headers/cpuset.o 00:04:46.205 CC examples/bdev/bdevperf/bdevperf.o 00:04:46.205 CC test/nvme/sgl/sgl.o 00:04:46.205 CXX test/cpp_headers/crc16.o 00:04:46.205 CC test/accel/dif/dif.o 00:04:46.205 LINK reset 00:04:46.205 LINK abort 00:04:46.205 LINK rpc_client_test 00:04:46.205 CXX test/cpp_headers/crc32.o 00:04:46.465 CXX test/cpp_headers/crc64.o 00:04:46.465 LINK sgl 00:04:46.465 CC test/blobfs/mkfs/mkfs.o 00:04:46.465 LINK memory_ut 00:04:46.465 CC test/nvme/e2edp/nvme_dp.o 00:04:46.465 CC test/lvol/esnap/esnap.o 00:04:46.465 CC test/nvme/overhead/overhead.o 00:04:46.465 CXX test/cpp_headers/dif.o 00:04:46.465 CC test/nvme/err_injection/err_injection.o 00:04:46.726 LINK mkfs 00:04:46.726 CXX test/cpp_headers/dma.o 00:04:46.726 CC test/nvme/startup/startup.o 00:04:46.726 LINK err_injection 00:04:46.987 LINK overhead 00:04:46.987 LINK nvme_dp 00:04:46.987 CC test/nvme/reserve/reserve.o 00:04:46.987 CXX test/cpp_headers/endian.o 00:04:46.987 LINK startup 00:04:46.987 CXX test/cpp_headers/env_dpdk.o 00:04:46.987 LINK dif 00:04:46.987 CC test/nvme/simple_copy/simple_copy.o 00:04:46.987 CXX test/cpp_headers/env.o 00:04:47.246 LINK bdevperf 00:04:47.246 CXX test/cpp_headers/event.o 00:04:47.246 LINK reserve 00:04:47.246 CC test/nvme/connect_stress/connect_stress.o 00:04:47.246 CC test/nvme/boot_partition/boot_partition.o 00:04:47.246 CC test/nvme/compliance/nvme_compliance.o 00:04:47.246 LINK simple_copy 00:04:47.246 CXX test/cpp_headers/fd_group.o 00:04:47.506 LINK boot_partition 00:04:47.506 CXX test/cpp_headers/fd.o 00:04:47.506 LINK connect_stress 00:04:47.506 CC test/nvme/fused_ordering/fused_ordering.o 00:04:47.506 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:47.506 CXX 
test/cpp_headers/file.o 00:04:47.506 CC examples/nvmf/nvmf/nvmf.o 00:04:47.506 CXX test/cpp_headers/fsdev.o 00:04:47.506 LINK doorbell_aers 00:04:47.765 CC test/nvme/cuse/cuse.o 00:04:47.765 LINK fused_ordering 00:04:47.765 CC test/nvme/fdp/fdp.o 00:04:47.765 LINK nvme_compliance 00:04:47.765 CXX test/cpp_headers/fsdev_module.o 00:04:47.765 CXX test/cpp_headers/ftl.o 00:04:47.765 CC test/bdev/bdevio/bdevio.o 00:04:47.765 CXX test/cpp_headers/fuse_dispatcher.o 00:04:47.765 CXX test/cpp_headers/gpt_spec.o 00:04:47.765 CXX test/cpp_headers/hexlify.o 00:04:48.024 CXX test/cpp_headers/histogram_data.o 00:04:48.024 LINK nvmf 00:04:48.024 CXX test/cpp_headers/idxd.o 00:04:48.024 CXX test/cpp_headers/idxd_spec.o 00:04:48.024 CXX test/cpp_headers/init.o 00:04:48.024 CXX test/cpp_headers/ioat.o 00:04:48.024 LINK fdp 00:04:48.024 CXX test/cpp_headers/ioat_spec.o 00:04:48.283 CXX test/cpp_headers/iscsi_spec.o 00:04:48.283 CXX test/cpp_headers/json.o 00:04:48.283 CXX test/cpp_headers/jsonrpc.o 00:04:48.283 CXX test/cpp_headers/keyring.o 00:04:48.283 CXX test/cpp_headers/keyring_module.o 00:04:48.283 LINK bdevio 00:04:48.283 CXX test/cpp_headers/likely.o 00:04:48.283 CXX test/cpp_headers/log.o 00:04:48.283 CXX test/cpp_headers/lvol.o 00:04:48.283 CXX test/cpp_headers/md5.o 00:04:48.283 CXX test/cpp_headers/memory.o 00:04:48.283 CXX test/cpp_headers/mmio.o 00:04:48.541 CXX test/cpp_headers/nbd.o 00:04:48.541 CXX test/cpp_headers/net.o 00:04:48.541 CXX test/cpp_headers/notify.o 00:04:48.541 CXX test/cpp_headers/nvme.o 00:04:48.541 CXX test/cpp_headers/nvme_intel.o 00:04:48.541 CXX test/cpp_headers/nvme_ocssd.o 00:04:48.541 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:48.541 CXX test/cpp_headers/nvme_spec.o 00:04:48.541 CXX test/cpp_headers/nvme_zns.o 00:04:48.541 CXX test/cpp_headers/nvmf_cmd.o 00:04:48.801 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:48.801 CXX test/cpp_headers/nvmf.o 00:04:48.801 CXX test/cpp_headers/nvmf_spec.o 00:04:48.801 CXX test/cpp_headers/nvmf_transport.o 
00:04:48.801 CXX test/cpp_headers/opal.o 00:04:48.801 CXX test/cpp_headers/opal_spec.o 00:04:48.801 CXX test/cpp_headers/pci_ids.o 00:04:48.801 CXX test/cpp_headers/pipe.o 00:04:48.801 CXX test/cpp_headers/queue.o 00:04:48.801 CXX test/cpp_headers/reduce.o 00:04:49.061 CXX test/cpp_headers/rpc.o 00:04:49.061 CXX test/cpp_headers/scheduler.o 00:04:49.061 CXX test/cpp_headers/scsi.o 00:04:49.061 CXX test/cpp_headers/scsi_spec.o 00:04:49.061 CXX test/cpp_headers/sock.o 00:04:49.061 CXX test/cpp_headers/stdinc.o 00:04:49.061 CXX test/cpp_headers/string.o 00:04:49.061 LINK cuse 00:04:49.061 CXX test/cpp_headers/thread.o 00:04:49.061 CXX test/cpp_headers/trace.o 00:04:49.061 CXX test/cpp_headers/trace_parser.o 00:04:49.061 CXX test/cpp_headers/tree.o 00:04:49.319 CXX test/cpp_headers/ublk.o 00:04:49.319 CXX test/cpp_headers/util.o 00:04:49.319 CXX test/cpp_headers/uuid.o 00:04:49.319 CXX test/cpp_headers/version.o 00:04:49.319 CXX test/cpp_headers/vfio_user_pci.o 00:04:49.319 CXX test/cpp_headers/vfio_user_spec.o 00:04:49.319 CXX test/cpp_headers/vhost.o 00:04:49.319 CXX test/cpp_headers/vmd.o 00:04:49.319 CXX test/cpp_headers/xor.o 00:04:49.319 CXX test/cpp_headers/zipf.o 00:04:53.518 LINK esnap 00:04:53.777 00:04:53.777 real 1m38.717s 00:04:53.777 user 8m26.407s 00:04:53.777 sys 1m51.925s 00:04:53.777 09:41:54 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:53.777 09:41:54 make -- common/autotest_common.sh@10 -- $ set +x 00:04:53.777 ************************************ 00:04:53.777 END TEST make 00:04:53.777 ************************************ 00:04:53.777 09:41:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:53.777 09:41:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:53.777 09:41:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:53.777 09:41:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:53.777 09:41:54 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:53.777 09:41:54 -- pm/common@44 -- $ pid=5481 00:04:53.777 09:41:54 -- pm/common@50 -- $ kill -TERM 5481 00:04:53.777 09:41:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:53.777 09:41:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:53.777 09:41:54 -- pm/common@44 -- $ pid=5483 00:04:53.777 09:41:54 -- pm/common@50 -- $ kill -TERM 5483 00:04:53.777 09:41:54 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:53.777 09:41:54 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:54.038 09:41:54 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.038 09:41:54 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.038 09:41:54 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.038 09:41:55 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.038 09:41:55 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.038 09:41:55 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.038 09:41:55 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.038 09:41:55 -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.038 09:41:55 -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.038 09:41:55 -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.038 09:41:55 -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.038 09:41:55 -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.038 09:41:55 -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.038 09:41:55 -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.038 09:41:55 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.038 09:41:55 -- scripts/common.sh@344 -- # case "$op" in 00:04:54.038 09:41:55 -- scripts/common.sh@345 -- # : 1 00:04:54.038 09:41:55 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.038 09:41:55 -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.038 09:41:55 -- scripts/common.sh@365 -- # decimal 1 00:04:54.038 09:41:55 -- scripts/common.sh@353 -- # local d=1 00:04:54.038 09:41:55 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.038 09:41:55 -- scripts/common.sh@355 -- # echo 1 00:04:54.038 09:41:55 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.038 09:41:55 -- scripts/common.sh@366 -- # decimal 2 00:04:54.038 09:41:55 -- scripts/common.sh@353 -- # local d=2 00:04:54.038 09:41:55 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.038 09:41:55 -- scripts/common.sh@355 -- # echo 2 00:04:54.038 09:41:55 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.038 09:41:55 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.038 09:41:55 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.038 09:41:55 -- scripts/common.sh@368 -- # return 0 00:04:54.038 09:41:55 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.038 09:41:55 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.038 --rc genhtml_branch_coverage=1 00:04:54.038 --rc genhtml_function_coverage=1 00:04:54.038 --rc genhtml_legend=1 00:04:54.038 --rc geninfo_all_blocks=1 00:04:54.038 --rc geninfo_unexecuted_blocks=1 00:04:54.038 00:04:54.038 ' 00:04:54.038 09:41:55 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.038 --rc genhtml_branch_coverage=1 00:04:54.038 --rc genhtml_function_coverage=1 00:04:54.038 --rc genhtml_legend=1 00:04:54.038 --rc geninfo_all_blocks=1 00:04:54.038 --rc geninfo_unexecuted_blocks=1 00:04:54.038 00:04:54.038 ' 00:04:54.038 09:41:55 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:54.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.038 --rc 
genhtml_branch_coverage=1 00:04:54.038 --rc genhtml_function_coverage=1 00:04:54.038 --rc genhtml_legend=1 00:04:54.038 --rc geninfo_all_blocks=1 00:04:54.038 --rc geninfo_unexecuted_blocks=1 00:04:54.038 00:04:54.038 ' 00:04:54.038 09:41:55 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.038 --rc genhtml_branch_coverage=1 00:04:54.038 --rc genhtml_function_coverage=1 00:04:54.038 --rc genhtml_legend=1 00:04:54.038 --rc geninfo_all_blocks=1 00:04:54.038 --rc geninfo_unexecuted_blocks=1 00:04:54.038 00:04:54.038 ' 00:04:54.038 09:41:55 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:54.038 09:41:55 -- nvmf/common.sh@7 -- # uname -s 00:04:54.038 09:41:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:54.038 09:41:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:54.038 09:41:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:54.038 09:41:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:54.038 09:41:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:54.038 09:41:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:54.038 09:41:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:54.038 09:41:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:54.038 09:41:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:54.038 09:41:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:54.038 09:41:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:730d5460-0697-4866-b4bd-cde3bf211b9d 00:04:54.038 09:41:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=730d5460-0697-4866-b4bd-cde3bf211b9d 00:04:54.038 09:41:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:54.038 09:41:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:54.038 09:41:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:54.038 09:41:55 -- nvmf/common.sh@22 -- 
# NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:54.038 09:41:55 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:54.038 09:41:55 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:54.038 09:41:55 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:54.038 09:41:55 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:54.038 09:41:55 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:54.038 09:41:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.038 09:41:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.038 09:41:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.038 09:41:55 -- paths/export.sh@5 -- # export PATH 00:04:54.038 09:41:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:54.038 09:41:55 -- nvmf/common.sh@51 -- # : 0 00:04:54.038 09:41:55 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:54.038 09:41:55 -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:54.038 09:41:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:54.038 09:41:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:54.038 09:41:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:54.038 09:41:55 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:54.038 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:54.038 09:41:55 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:54.038 09:41:55 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:54.038 09:41:55 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:54.038 09:41:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:54.038 09:41:55 -- spdk/autotest.sh@32 -- # uname -s 00:04:54.038 09:41:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:54.038 09:41:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:54.038 09:41:55 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:54.038 09:41:55 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:54.038 09:41:55 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:54.299 09:41:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:54.299 09:41:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:54.299 09:41:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:54.299 09:41:55 -- spdk/autotest.sh@48 -- # udevadm_pid=54602 00:04:54.299 09:41:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:54.299 09:41:55 -- pm/common@17 -- # local monitor 00:04:54.299 09:41:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:54.299 09:41:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:54.299 09:41:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:54.299 09:41:55 -- pm/common@25 -- # sleep 1 00:04:54.299 09:41:55 -- pm/common@21 -- 
# date +%s 00:04:54.299 09:41:55 -- pm/common@21 -- # date +%s 00:04:54.299 09:41:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732700515 00:04:54.299 09:41:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732700515 00:04:54.299 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732700515_collect-vmstat.pm.log 00:04:54.299 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732700515_collect-cpu-load.pm.log 00:04:55.239 09:41:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:55.239 09:41:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:55.239 09:41:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:55.239 09:41:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.239 09:41:56 -- spdk/autotest.sh@59 -- # create_test_list 00:04:55.239 09:41:56 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:55.239 09:41:56 -- common/autotest_common.sh@10 -- # set +x 00:04:55.239 09:41:56 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:55.239 09:41:56 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:55.239 09:41:56 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:55.239 09:41:56 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:55.239 09:41:56 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:55.239 09:41:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:55.239 09:41:56 -- common/autotest_common.sh@1457 -- # uname 00:04:55.239 09:41:56 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:55.239 09:41:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:55.239 09:41:56 
-- common/autotest_common.sh@1477 -- # uname 00:04:55.239 09:41:56 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:55.239 09:41:56 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:55.239 09:41:56 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:55.498 lcov: LCOV version 1.15 00:04:55.499 09:41:56 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:10.390 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:10.390 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:28.526 09:42:27 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:28.526 09:42:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.526 09:42:27 -- common/autotest_common.sh@10 -- # set +x 00:05:28.526 09:42:27 -- spdk/autotest.sh@78 -- # rm -f 00:05:28.526 09:42:27 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:28.526 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:28.526 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:28.526 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:28.526 09:42:27 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:28.526 09:42:27 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:28.526 09:42:27 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:28.526 09:42:27 -- common/autotest_common.sh@1658 -- 
# local nvme bdf 00:05:28.526 09:42:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:28.526 09:42:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:28.526 09:42:27 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:28.526 09:42:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:28.526 09:42:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:28.526 09:42:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:28.526 09:42:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:28.526 09:42:27 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:28.526 09:42:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:28.526 09:42:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:28.526 09:42:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:28.526 09:42:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:05:28.526 09:42:27 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:28.526 09:42:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:28.526 09:42:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:28.526 09:42:27 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:28.526 09:42:27 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:05:28.526 09:42:27 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:28.526 09:42:27 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:28.526 09:42:27 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:28.526 09:42:27 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:28.526 09:42:27 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:28.526 09:42:27 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:28.526 09:42:27 -- 
spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:28.526 09:42:27 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:28.526 09:42:27 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:28.526 No valid GPT data, bailing 00:05:28.526 09:42:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:28.526 09:42:28 -- scripts/common.sh@394 -- # pt= 00:05:28.526 09:42:28 -- scripts/common.sh@395 -- # return 1 00:05:28.526 09:42:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:28.526 1+0 records in 00:05:28.526 1+0 records out 00:05:28.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00686519 s, 153 MB/s 00:05:28.526 09:42:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:28.526 09:42:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:28.526 09:42:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:28.526 09:42:28 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:28.526 09:42:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:28.526 No valid GPT data, bailing 00:05:28.526 09:42:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:28.526 09:42:28 -- scripts/common.sh@394 -- # pt= 00:05:28.526 09:42:28 -- scripts/common.sh@395 -- # return 1 00:05:28.526 09:42:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:28.526 1+0 records in 00:05:28.526 1+0 records out 00:05:28.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00675309 s, 155 MB/s 00:05:28.526 09:42:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:28.526 09:42:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:28.526 09:42:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:28.526 09:42:28 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:28.526 09:42:28 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:28.526 No valid GPT data, bailing 00:05:28.526 09:42:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:28.526 09:42:28 -- scripts/common.sh@394 -- # pt= 00:05:28.526 09:42:28 -- scripts/common.sh@395 -- # return 1 00:05:28.526 09:42:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:28.526 1+0 records in 00:05:28.526 1+0 records out 00:05:28.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00536244 s, 196 MB/s 00:05:28.526 09:42:28 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:28.526 09:42:28 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:28.526 09:42:28 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:28.526 09:42:28 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:28.526 09:42:28 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:28.526 No valid GPT data, bailing 00:05:28.526 09:42:28 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:28.526 09:42:28 -- scripts/common.sh@394 -- # pt= 00:05:28.526 09:42:28 -- scripts/common.sh@395 -- # return 1 00:05:28.526 09:42:28 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:28.526 1+0 records in 00:05:28.526 1+0 records out 00:05:28.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00679244 s, 154 MB/s 00:05:28.526 09:42:28 -- spdk/autotest.sh@105 -- # sync 00:05:28.526 09:42:28 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:28.526 09:42:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:28.526 09:42:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:30.432 09:42:31 -- spdk/autotest.sh@111 -- # uname -s 00:05:30.432 09:42:31 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:30.432 09:42:31 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:30.432 09:42:31 -- spdk/autotest.sh@115 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:31.003 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:31.003 Hugepages 00:05:31.003 node hugesize free / total 00:05:31.003 node0 1048576kB 0 / 0 00:05:31.003 node0 2048kB 0 / 0 00:05:31.003 00:05:31.003 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:31.263 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:31.263 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:31.263 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:31.263 09:42:32 -- spdk/autotest.sh@117 -- # uname -s 00:05:31.263 09:42:32 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:31.263 09:42:32 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:31.263 09:42:32 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:32.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.202 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:32.202 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:32.461 09:42:33 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:33.402 09:42:34 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:33.402 09:42:34 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:33.402 09:42:34 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:33.402 09:42:34 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:33.402 09:42:34 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:33.402 09:42:34 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:33.402 09:42:34 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:33.402 09:42:34 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:33.402 09:42:34 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:05:33.402 09:42:34 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:33.402 09:42:34 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:33.402 09:42:34 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:33.973 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:33.973 Waiting for block devices as requested 00:05:33.973 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:34.234 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:34.234 09:42:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:34.234 09:42:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:34.234 09:42:35 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:34.234 09:42:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:34.234 09:42:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:34.234 09:42:35 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:34.234 09:42:35 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:34.234 09:42:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:34.234 09:42:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:34.234 09:42:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:34.234 09:42:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:34.234 09:42:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:34.234 09:42:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:34.234 09:42:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:34.234 09:42:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 
00:05:34.234 09:42:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:34.234 09:42:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:34.234 09:42:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:34.234 09:42:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:34.234 09:42:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:34.234 09:42:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:34.234 09:42:35 -- common/autotest_common.sh@1543 -- # continue 00:05:34.234 09:42:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:34.234 09:42:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:34.234 09:42:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:34.234 09:42:35 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:34.234 09:42:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:34.234 09:42:35 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:34.234 09:42:35 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:34.234 09:42:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:34.234 09:42:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:34.234 09:42:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:34.234 09:42:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:34.234 09:42:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:34.234 09:42:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:34.234 09:42:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:34.234 09:42:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:34.234 09:42:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:34.234 09:42:35 
-- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:34.234 09:42:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:34.234 09:42:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:34.234 09:42:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:34.234 09:42:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:34.234 09:42:35 -- common/autotest_common.sh@1543 -- # continue 00:05:34.234 09:42:35 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:34.234 09:42:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:34.234 09:42:35 -- common/autotest_common.sh@10 -- # set +x 00:05:34.494 09:42:35 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:34.494 09:42:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.494 09:42:35 -- common/autotest_common.sh@10 -- # set +x 00:05:34.494 09:42:35 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:35.074 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.335 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.335 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.335 09:42:36 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:35.335 09:42:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.335 09:42:36 -- common/autotest_common.sh@10 -- # set +x 00:05:35.335 09:42:36 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:35.335 09:42:36 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:35.335 09:42:36 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:35.335 09:42:36 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:35.335 09:42:36 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:35.335 09:42:36 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:35.335 09:42:36 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:35.335 09:42:36 -- 
common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:35.335 09:42:36 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:35.335 09:42:36 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:35.335 09:42:36 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:35.335 09:42:36 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:35.335 09:42:36 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:35.594 09:42:36 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:35.594 09:42:36 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:35.594 09:42:36 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:35.594 09:42:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:35.594 09:42:36 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:35.594 09:42:36 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:35.594 09:42:36 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:35.594 09:42:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:35.595 09:42:36 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:35.595 09:42:36 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:35.595 09:42:36 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:35.595 09:42:36 -- common/autotest_common.sh@1572 -- # return 0 00:05:35.595 09:42:36 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:35.595 09:42:36 -- common/autotest_common.sh@1580 -- # return 0 00:05:35.595 09:42:36 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:35.595 09:42:36 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:35.595 09:42:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:35.595 09:42:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:35.595 09:42:36 -- 
spdk/autotest.sh@149 -- # timing_enter lib 00:05:35.595 09:42:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:35.595 09:42:36 -- common/autotest_common.sh@10 -- # set +x 00:05:35.595 09:42:36 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:35.595 09:42:36 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:35.595 09:42:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.595 09:42:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.595 09:42:36 -- common/autotest_common.sh@10 -- # set +x 00:05:35.595 ************************************ 00:05:35.595 START TEST env 00:05:35.595 ************************************ 00:05:35.595 09:42:36 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:35.595 * Looking for test storage... 00:05:35.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:35.595 09:42:36 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.595 09:42:36 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.595 09:42:36 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.854 09:42:36 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.854 09:42:36 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.854 09:42:36 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.854 09:42:36 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.854 09:42:36 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.854 09:42:36 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.854 09:42:36 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.854 09:42:36 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.854 09:42:36 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.854 09:42:36 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.854 09:42:36 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.855 09:42:36 env -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:05:35.855 09:42:36 env -- scripts/common.sh@344 -- # case "$op" in 00:05:35.855 09:42:36 env -- scripts/common.sh@345 -- # : 1 00:05:35.855 09:42:36 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.855 09:42:36 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.855 09:42:36 env -- scripts/common.sh@365 -- # decimal 1 00:05:35.855 09:42:36 env -- scripts/common.sh@353 -- # local d=1 00:05:35.855 09:42:36 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.855 09:42:36 env -- scripts/common.sh@355 -- # echo 1 00:05:35.855 09:42:36 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.855 09:42:36 env -- scripts/common.sh@366 -- # decimal 2 00:05:35.855 09:42:36 env -- scripts/common.sh@353 -- # local d=2 00:05:35.855 09:42:36 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.855 09:42:36 env -- scripts/common.sh@355 -- # echo 2 00:05:35.855 09:42:36 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.855 09:42:36 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.855 09:42:36 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.855 09:42:36 env -- scripts/common.sh@368 -- # return 0 00:05:35.855 09:42:36 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.855 09:42:36 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.855 --rc genhtml_branch_coverage=1 00:05:35.855 --rc genhtml_function_coverage=1 00:05:35.855 --rc genhtml_legend=1 00:05:35.855 --rc geninfo_all_blocks=1 00:05:35.855 --rc geninfo_unexecuted_blocks=1 00:05:35.855 00:05:35.855 ' 00:05:35.855 09:42:36 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.855 --rc genhtml_branch_coverage=1 00:05:35.855 --rc genhtml_function_coverage=1 
00:05:35.855 --rc genhtml_legend=1 00:05:35.855 --rc geninfo_all_blocks=1 00:05:35.855 --rc geninfo_unexecuted_blocks=1 00:05:35.855 00:05:35.855 ' 00:05:35.855 09:42:36 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.855 --rc genhtml_branch_coverage=1 00:05:35.855 --rc genhtml_function_coverage=1 00:05:35.855 --rc genhtml_legend=1 00:05:35.855 --rc geninfo_all_blocks=1 00:05:35.855 --rc geninfo_unexecuted_blocks=1 00:05:35.855 00:05:35.855 ' 00:05:35.855 09:42:36 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.855 --rc genhtml_branch_coverage=1 00:05:35.855 --rc genhtml_function_coverage=1 00:05:35.855 --rc genhtml_legend=1 00:05:35.855 --rc geninfo_all_blocks=1 00:05:35.855 --rc geninfo_unexecuted_blocks=1 00:05:35.855 00:05:35.855 ' 00:05:35.855 09:42:36 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:35.855 09:42:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.855 09:42:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.855 09:42:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:35.855 ************************************ 00:05:35.855 START TEST env_memory 00:05:35.855 ************************************ 00:05:35.855 09:42:36 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:35.855 00:05:35.855 00:05:35.855 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.855 http://cunit.sourceforge.net/ 00:05:35.855 00:05:35.855 00:05:35.855 Suite: memory 00:05:35.855 Test: alloc and free memory map ...[2024-11-27 09:42:36.878561] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:35.855 passed 00:05:35.855 Test: mem map translation 
...[2024-11-27 09:42:36.922909] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:35.855 [2024-11-27 09:42:36.922990] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:35.855 [2024-11-27 09:42:36.923069] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:35.855 [2024-11-27 09:42:36.923092] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:36.114 passed 00:05:36.114 Test: mem map registration ...[2024-11-27 09:42:36.994021] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:36.114 [2024-11-27 09:42:36.994088] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:36.114 passed 00:05:36.114 Test: mem map adjacent registrations ...passed 00:05:36.114 00:05:36.114 Run Summary: Type Total Ran Passed Failed Inactive 00:05:36.114 suites 1 1 n/a 0 0 00:05:36.114 tests 4 4 4 0 0 00:05:36.114 asserts 152 152 152 0 n/a 00:05:36.114 00:05:36.114 Elapsed time = 0.246 seconds 00:05:36.114 00:05:36.114 real 0m0.297s 00:05:36.114 user 0m0.262s 00:05:36.114 sys 0m0.025s 00:05:36.115 09:42:37 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.115 09:42:37 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:36.115 ************************************ 00:05:36.115 END TEST env_memory 00:05:36.115 ************************************ 00:05:36.115 09:42:37 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:36.115 09:42:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.115 09:42:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.115 09:42:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:36.115 ************************************ 00:05:36.115 START TEST env_vtophys 00:05:36.115 ************************************ 00:05:36.115 09:42:37 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:36.115 EAL: lib.eal log level changed from notice to debug 00:05:36.115 EAL: Detected lcore 0 as core 0 on socket 0 00:05:36.115 EAL: Detected lcore 1 as core 0 on socket 0 00:05:36.115 EAL: Detected lcore 2 as core 0 on socket 0 00:05:36.115 EAL: Detected lcore 3 as core 0 on socket 0 00:05:36.115 EAL: Detected lcore 4 as core 0 on socket 0 00:05:36.115 EAL: Detected lcore 5 as core 0 on socket 0 00:05:36.115 EAL: Detected lcore 6 as core 0 on socket 0 00:05:36.115 EAL: Detected lcore 7 as core 0 on socket 0 00:05:36.115 EAL: Detected lcore 8 as core 0 on socket 0 00:05:36.115 EAL: Detected lcore 9 as core 0 on socket 0 00:05:36.115 EAL: Maximum logical cores by configuration: 128 00:05:36.115 EAL: Detected CPU lcores: 10 00:05:36.115 EAL: Detected NUMA nodes: 1 00:05:36.115 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:36.115 EAL: Detected shared linkage of DPDK 00:05:36.374 EAL: No shared files mode enabled, IPC will be disabled 00:05:36.374 EAL: Selected IOVA mode 'PA' 00:05:36.374 EAL: Probing VFIO support... 00:05:36.374 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:36.374 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:36.374 EAL: Ask a virtual area of 0x2e000 bytes 00:05:36.374 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:36.374 EAL: Setting up physically contiguous memory... 
00:05:36.374 EAL: Setting maximum number of open files to 524288 00:05:36.374 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:36.374 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:36.374 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.374 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:36.374 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.374 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.374 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:36.374 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:36.374 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.374 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:36.374 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.374 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.374 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:36.374 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:36.374 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.374 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:36.374 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.374 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.374 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:36.374 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:36.374 EAL: Ask a virtual area of 0x61000 bytes 00:05:36.374 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:36.374 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:36.374 EAL: Ask a virtual area of 0x400000000 bytes 00:05:36.374 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:36.374 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:36.374 EAL: Hugepages will be freed exactly as allocated. 
00:05:36.374 EAL: No shared files mode enabled, IPC is disabled
00:05:36.374 EAL: No shared files mode enabled, IPC is disabled
00:05:36.375 EAL: TSC frequency is ~2290000 KHz
00:05:36.375 EAL: Main lcore 0 is ready (tid=7fae6ccc7a40;cpuset=[0])
00:05:36.375 EAL: Trying to obtain current memory policy.
00:05:36.375 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:36.375 EAL: Restoring previous memory policy: 0
00:05:36.375 EAL: request: mp_malloc_sync
00:05:36.375 EAL: No shared files mode enabled, IPC is disabled
00:05:36.375 EAL: Heap on socket 0 was expanded by 2MB
00:05:36.375 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:05:36.375 EAL: No PCI address specified using 'addr=' in: bus=pci
00:05:36.375 EAL: Mem event callback 'spdk:(nil)' registered
00:05:36.375 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:05:36.375
00:05:36.375
00:05:36.375 CUnit - A unit testing framework for C - Version 2.1-3
00:05:36.375 http://cunit.sourceforge.net/
00:05:36.375
00:05:36.375
00:05:36.375 Suite: components_suite
00:05:36.944 Test: vtophys_malloc_test ...passed
00:05:36.944 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:05:36.944 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:36.944 EAL: Restoring previous memory policy: 4
00:05:36.944 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.944 EAL: request: mp_malloc_sync
00:05:36.944 EAL: No shared files mode enabled, IPC is disabled
00:05:36.944 EAL: Heap on socket 0 was expanded by 4MB
00:05:36.944 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.944 EAL: request: mp_malloc_sync
00:05:36.944 EAL: No shared files mode enabled, IPC is disabled
00:05:36.944 EAL: Heap on socket 0 was shrunk by 4MB
00:05:36.944 EAL: Trying to obtain current memory policy.
00:05:36.944 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:36.944 EAL: Restoring previous memory policy: 4
00:05:36.944 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.944 EAL: request: mp_malloc_sync
00:05:36.944 EAL: No shared files mode enabled, IPC is disabled
00:05:36.944 EAL: Heap on socket 0 was expanded by 6MB
00:05:36.944 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.944 EAL: request: mp_malloc_sync
00:05:36.944 EAL: No shared files mode enabled, IPC is disabled
00:05:36.944 EAL: Heap on socket 0 was shrunk by 6MB
00:05:36.944 EAL: Trying to obtain current memory policy.
00:05:36.944 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:36.944 EAL: Restoring previous memory policy: 4
00:05:36.944 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.944 EAL: request: mp_malloc_sync
00:05:36.944 EAL: No shared files mode enabled, IPC is disabled
00:05:36.944 EAL: Heap on socket 0 was expanded by 10MB
00:05:36.944 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.944 EAL: request: mp_malloc_sync
00:05:36.944 EAL: No shared files mode enabled, IPC is disabled
00:05:36.944 EAL: Heap on socket 0 was shrunk by 10MB
00:05:36.944 EAL: Trying to obtain current memory policy.
00:05:36.944 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:36.944 EAL: Restoring previous memory policy: 4
00:05:36.944 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.944 EAL: request: mp_malloc_sync
00:05:36.944 EAL: No shared files mode enabled, IPC is disabled
00:05:36.944 EAL: Heap on socket 0 was expanded by 18MB
00:05:36.944 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.944 EAL: request: mp_malloc_sync
00:05:36.944 EAL: No shared files mode enabled, IPC is disabled
00:05:36.944 EAL: Heap on socket 0 was shrunk by 18MB
00:05:36.944 EAL: Trying to obtain current memory policy.
00:05:36.944 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:36.944 EAL: Restoring previous memory policy: 4
00:05:36.944 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.944 EAL: request: mp_malloc_sync
00:05:36.944 EAL: No shared files mode enabled, IPC is disabled
00:05:36.944 EAL: Heap on socket 0 was expanded by 34MB
00:05:36.944 EAL: Calling mem event callback 'spdk:(nil)'
00:05:36.944 EAL: request: mp_malloc_sync
00:05:36.944 EAL: No shared files mode enabled, IPC is disabled
00:05:36.944 EAL: Heap on socket 0 was shrunk by 34MB
00:05:37.204 EAL: Trying to obtain current memory policy.
00:05:37.204 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:37.204 EAL: Restoring previous memory policy: 4
00:05:37.204 EAL: Calling mem event callback 'spdk:(nil)'
00:05:37.204 EAL: request: mp_malloc_sync
00:05:37.204 EAL: No shared files mode enabled, IPC is disabled
00:05:37.204 EAL: Heap on socket 0 was expanded by 66MB
00:05:37.204 EAL: Calling mem event callback 'spdk:(nil)'
00:05:37.204 EAL: request: mp_malloc_sync
00:05:37.204 EAL: No shared files mode enabled, IPC is disabled
00:05:37.204 EAL: Heap on socket 0 was shrunk by 66MB
00:05:37.463 EAL: Trying to obtain current memory policy.
00:05:37.463 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:37.463 EAL: Restoring previous memory policy: 4
00:05:37.463 EAL: Calling mem event callback 'spdk:(nil)'
00:05:37.463 EAL: request: mp_malloc_sync
00:05:37.463 EAL: No shared files mode enabled, IPC is disabled
00:05:37.463 EAL: Heap on socket 0 was expanded by 130MB
00:05:37.722 EAL: Calling mem event callback 'spdk:(nil)'
00:05:37.722 EAL: request: mp_malloc_sync
00:05:37.722 EAL: No shared files mode enabled, IPC is disabled
00:05:37.722 EAL: Heap on socket 0 was shrunk by 130MB
00:05:37.982 EAL: Trying to obtain current memory policy.
00:05:37.982 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:37.982 EAL: Restoring previous memory policy: 4
00:05:37.982 EAL: Calling mem event callback 'spdk:(nil)'
00:05:37.982 EAL: request: mp_malloc_sync
00:05:37.982 EAL: No shared files mode enabled, IPC is disabled
00:05:37.982 EAL: Heap on socket 0 was expanded by 258MB
00:05:38.557 EAL: Calling mem event callback 'spdk:(nil)'
00:05:38.557 EAL: request: mp_malloc_sync
00:05:38.557 EAL: No shared files mode enabled, IPC is disabled
00:05:38.557 EAL: Heap on socket 0 was shrunk by 258MB
00:05:39.124 EAL: Trying to obtain current memory policy.
00:05:39.124 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:39.124 EAL: Restoring previous memory policy: 4
00:05:39.124 EAL: Calling mem event callback 'spdk:(nil)'
00:05:39.124 EAL: request: mp_malloc_sync
00:05:39.124 EAL: No shared files mode enabled, IPC is disabled
00:05:39.124 EAL: Heap on socket 0 was expanded by 514MB
00:05:40.075 EAL: Calling mem event callback 'spdk:(nil)'
00:05:40.334 EAL: request: mp_malloc_sync
00:05:40.334 EAL: No shared files mode enabled, IPC is disabled
00:05:40.334 EAL: Heap on socket 0 was shrunk by 514MB
00:05:41.274 EAL: Trying to obtain current memory policy.
00:05:41.274 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:41.533 EAL: Restoring previous memory policy: 4
00:05:41.533 EAL: Calling mem event callback 'spdk:(nil)'
00:05:41.533 EAL: request: mp_malloc_sync
00:05:41.533 EAL: No shared files mode enabled, IPC is disabled
00:05:41.533 EAL: Heap on socket 0 was expanded by 1026MB
00:05:43.440 EAL: Calling mem event callback 'spdk:(nil)'
00:05:44.009 EAL: request: mp_malloc_sync
00:05:44.009 EAL: No shared files mode enabled, IPC is disabled
00:05:44.009 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:45.919 passed
00:05:45.919
00:05:45.919 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:45.919 suites      1      1    n/a      0        0
00:05:45.919 tests       2      2      2      0        0
00:05:45.919 asserts  5838   5838   5838      0      n/a
00:05:45.919
00:05:45.919 Elapsed time = 9.086 seconds
00:05:45.919 EAL: Calling mem event callback 'spdk:(nil)'
00:05:45.919 EAL: request: mp_malloc_sync
00:05:45.919 EAL: No shared files mode enabled, IPC is disabled
00:05:45.919 EAL: Heap on socket 0 was shrunk by 2MB
00:05:45.919 EAL: No shared files mode enabled, IPC is disabled
00:05:45.919 EAL: No shared files mode enabled, IPC is disabled
00:05:45.919 EAL: No shared files mode enabled, IPC is disabled
00:05:45.919
00:05:45.919 real 0m9.408s
00:05:45.919 user 0m8.028s
00:05:45.919 sys 0m1.222s
00:05:45.919 09:42:46 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:45.919 09:42:46 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:45.919 ************************************
00:05:45.919 END TEST env_vtophys
00:05:45.919 ************************************
00:05:45.919 09:42:46 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:05:45.919 09:42:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:45.919 09:42:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:45.919 09:42:46 env -- common/autotest_common.sh@10 -- # set +x
00:05:45.919 ************************************
00:05:45.919 START TEST env_pci
00:05:45.919 ************************************
00:05:45.919 09:42:46 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:05:45.919
00:05:45.919
00:05:45.919 CUnit - A unit testing framework for C - Version 2.1-3
00:05:45.919 http://cunit.sourceforge.net/
00:05:45.919
00:05:45.919
00:05:45.919 Suite: pci
00:05:45.919 Test: pci_hook ...[2024-11-27 09:42:46.689554] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56941 has claimed it
00:05:45.919 passed
00:05:45.919
00:05:45.919 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:45.919 suites      1      1    n/a      0        0
00:05:45.919 tests       1      1      1      0        0
00:05:45.919 asserts    25     25     25      0      n/a
00:05:45.919
00:05:45.919 Elapsed time = 0.006 seconds
00:05:45.919 EAL: Cannot find device (10000:00:01.0)
00:05:45.919 EAL: Failed to attach device on primary process
00:05:45.919
00:05:45.919 real 0m0.089s
00:05:45.919 user 0m0.034s
00:05:45.919 sys 0m0.054s
00:05:45.919 09:42:46 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:45.919 09:42:46 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:45.919 ************************************
00:05:45.919 END TEST env_pci
00:05:45.919 ************************************
00:05:45.919 09:42:46 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:45.919 09:42:46 env -- env/env.sh@15 -- # uname
00:05:45.919 09:42:46 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:45.919 09:42:46 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:45.919 09:42:46 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:45.919 09:42:46 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:05:45.919 09:42:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:45.919 09:42:46 env -- common/autotest_common.sh@10 -- # set +x
00:05:45.919 ************************************
00:05:45.919 START TEST env_dpdk_post_init
00:05:45.919 ************************************
00:05:45.919 09:42:46 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:45.919 EAL: Detected CPU lcores: 10
00:05:45.919 EAL: Detected NUMA nodes: 1
00:05:45.919 EAL: Detected shared linkage of DPDK
00:05:45.919 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:45.919 EAL: Selected IOVA mode 'PA'
00:05:45.919 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:45.920 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:05:45.920 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:05:46.179 Starting DPDK initialization...
00:05:46.179 Starting SPDK post initialization...
00:05:46.179 SPDK NVMe probe
00:05:46.179 Attaching to 0000:00:10.0
00:05:46.179 Attaching to 0000:00:11.0
00:05:46.179 Attached to 0000:00:10.0
00:05:46.179 Attached to 0000:00:11.0
00:05:46.179 Cleaning up...
00:05:46.179
00:05:46.179 real 0m0.286s
00:05:46.179 user 0m0.094s
00:05:46.179 sys 0m0.092s
00:05:46.179 09:42:47 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:46.179 09:42:47 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:46.179 ************************************
00:05:46.179 END TEST env_dpdk_post_init
00:05:46.179 ************************************
00:05:46.179 09:42:47 env -- env/env.sh@26 -- # uname
00:05:46.179 09:42:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:46.179 09:42:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:05:46.179 09:42:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:46.179 09:42:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:46.179 09:42:47 env -- common/autotest_common.sh@10 -- # set +x
00:05:46.179 ************************************
00:05:46.179 START TEST env_mem_callbacks
00:05:46.179 ************************************
00:05:46.179 09:42:47 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:05:46.179 EAL: Detected CPU lcores: 10
00:05:46.179 EAL: Detected NUMA nodes: 1
00:05:46.179 EAL: Detected shared linkage of DPDK
00:05:46.179 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:46.179 EAL: Selected IOVA mode 'PA'
00:05:46.439
00:05:46.439
00:05:46.439 CUnit - A unit testing framework for C - Version 2.1-3
00:05:46.439 http://cunit.sourceforge.net/
00:05:46.439
00:05:46.439
00:05:46.439 Suite: memory
00:05:46.439 Test: test ...
00:05:46.439 register 0x200000200000 2097152
00:05:46.439 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:46.439 malloc 3145728
00:05:46.439 register 0x200000400000 4194304
00:05:46.439 buf 0x2000004fffc0 len 3145728 PASSED
00:05:46.439 malloc 64
00:05:46.439 buf 0x2000004ffec0 len 64 PASSED
00:05:46.439 malloc 4194304
00:05:46.439 register 0x200000800000 6291456
00:05:46.439 buf 0x2000009fffc0 len 4194304 PASSED
00:05:46.439 free 0x2000004fffc0 3145728
00:05:46.439 free 0x2000004ffec0 64
00:05:46.439 unregister 0x200000400000 4194304 PASSED
00:05:46.439 free 0x2000009fffc0 4194304
00:05:46.439 unregister 0x200000800000 6291456 PASSED
00:05:46.439 malloc 8388608
00:05:46.439 register 0x200000400000 10485760
00:05:46.439 buf 0x2000005fffc0 len 8388608 PASSED
00:05:46.439 free 0x2000005fffc0 8388608
00:05:46.439 unregister 0x200000400000 10485760 PASSED
00:05:46.439 passed
00:05:46.439
00:05:46.439 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:46.439 suites      1      1    n/a      0        0
00:05:46.439 tests       1      1      1      0        0
00:05:46.439 asserts    15     15     15      0      n/a
00:05:46.439
00:05:46.439 Elapsed time = 0.091 seconds
00:05:46.439
00:05:46.439 real 0m0.288s
00:05:46.439 user 0m0.106s
00:05:46.439 sys 0m0.081s
00:05:46.439 09:42:47 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:46.439 ************************************
00:05:46.439 END TEST env_mem_callbacks
00:05:46.439 ************************************
00:05:46.439 09:42:47 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:46.439
00:05:46.439 real 0m10.917s
00:05:46.439 user 0m8.752s
00:05:46.439 sys 0m1.809s
00:05:46.439 09:42:47 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:46.439 09:42:47 env -- common/autotest_common.sh@10 -- # set +x
00:05:46.439 ************************************
00:05:46.439 END TEST env
00:05:46.439 ************************************
00:05:46.439 09:42:47 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:05:46.439 09:42:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:46.439 09:42:47 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:46.439 09:42:47 -- common/autotest_common.sh@10 -- # set +x
00:05:46.439 ************************************
00:05:46.439 START TEST rpc
00:05:46.439 ************************************
00:05:46.439 09:42:47 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:05:46.699 * Looking for test storage...
00:05:46.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:05:46.699 09:42:47 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:46.699 09:42:47 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:46.699 09:42:47 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:05:46.699 09:42:47 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:46.699 09:42:47 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:46.699 09:42:47 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:46.699 09:42:47 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:46.699 09:42:47 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:46.699 09:42:47 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:46.699 09:42:47 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:46.699 09:42:47 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:46.699 09:42:47 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:46.699 09:42:47 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:46.699 09:42:47 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:46.699 09:42:47 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:46.699 09:42:47 rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:46.699 09:42:47 rpc -- scripts/common.sh@345 -- # : 1
00:05:46.699 09:42:47 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:46.699 09:42:47 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:46.700 09:42:47 rpc -- scripts/common.sh@365 -- # decimal 1
00:05:46.700 09:42:47 rpc -- scripts/common.sh@353 -- # local d=1
00:05:46.700 09:42:47 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:46.700 09:42:47 rpc -- scripts/common.sh@355 -- # echo 1
00:05:46.700 09:42:47 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:46.700 09:42:47 rpc -- scripts/common.sh@366 -- # decimal 2
00:05:46.700 09:42:47 rpc -- scripts/common.sh@353 -- # local d=2
00:05:46.700 09:42:47 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:46.700 09:42:47 rpc -- scripts/common.sh@355 -- # echo 2
00:05:46.700 09:42:47 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:46.700 09:42:47 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:46.700 09:42:47 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:46.700 09:42:47 rpc -- scripts/common.sh@368 -- # return 0
00:05:46.700 09:42:47 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:46.700 09:42:47 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:46.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:46.700 --rc genhtml_branch_coverage=1
00:05:46.700 --rc genhtml_function_coverage=1
00:05:46.700 --rc genhtml_legend=1
00:05:46.700 --rc geninfo_all_blocks=1
00:05:46.700 --rc geninfo_unexecuted_blocks=1
00:05:46.700
00:05:46.700 '
00:05:46.700 09:42:47 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:46.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:46.700 --rc genhtml_branch_coverage=1
00:05:46.700 --rc genhtml_function_coverage=1
00:05:46.700 --rc genhtml_legend=1
00:05:46.700 --rc geninfo_all_blocks=1
00:05:46.700 --rc geninfo_unexecuted_blocks=1
00:05:46.700
00:05:46.700 '
00:05:46.700 09:42:47 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:46.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:46.700 --rc genhtml_branch_coverage=1
00:05:46.700 --rc genhtml_function_coverage=1
00:05:46.700 --rc genhtml_legend=1
00:05:46.700 --rc geninfo_all_blocks=1
00:05:46.700 --rc geninfo_unexecuted_blocks=1
00:05:46.700
00:05:46.700 '
00:05:46.700 09:42:47 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:46.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:46.700 --rc genhtml_branch_coverage=1
00:05:46.700 --rc genhtml_function_coverage=1
00:05:46.700 --rc genhtml_legend=1
00:05:46.700 --rc geninfo_all_blocks=1
00:05:46.700 --rc geninfo_unexecuted_blocks=1
00:05:46.700
00:05:46.700 '
00:05:46.700 09:42:47 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57069
00:05:46.700 09:42:47 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:05:46.700 09:42:47 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:46.700 09:42:47 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57069
00:05:46.700 09:42:47 rpc -- common/autotest_common.sh@835 -- # '[' -z 57069 ']'
00:05:46.700 09:42:47 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:46.700 09:42:47 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:46.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:46.700 09:42:47 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:46.700 09:42:47 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:46.700 09:42:47 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:46.960 [2024-11-27 09:42:47.876650] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization...
00:05:46.960 [2024-11-27 09:42:47.876795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57069 ]
00:05:47.220 [2024-11-27 09:42:48.059561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:47.220 [2024-11-27 09:42:48.199669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:47.220 [2024-11-27 09:42:48.199777] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57069' to capture a snapshot of events at runtime.
00:05:47.220 [2024-11-27 09:42:48.199788] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:47.220 [2024-11-27 09:42:48.199801] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:47.220 [2024-11-27 09:42:48.199809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57069 for offline analysis/debug.
00:05:47.220 [2024-11-27 09:42:48.201190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:48.250 09:42:49 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:48.250 09:42:49 rpc -- common/autotest_common.sh@868 -- # return 0
00:05:48.250 09:42:49 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:05:48.250 09:42:49 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:05:48.250 09:42:49 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:48.250 09:42:49 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:48.250 09:42:49 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:48.250 09:42:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:48.250 09:42:49 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:48.250 ************************************
00:05:48.250 START TEST rpc_integrity
00:05:48.250 ************************************
00:05:48.250 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:05:48.250 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:48.250 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:48.250 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:48.250 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:48.250 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:48.250 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:48.250 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:48.250 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:48.250 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:48.250 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:48.250 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:48.250 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:48.250 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:48.250 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:48.250 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:48.250 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:48.250 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:48.250 {
00:05:48.250 "name": "Malloc0",
00:05:48.250 "aliases": [
00:05:48.250 "a4e295ef-133e-4b62-a050-192458558a50"
00:05:48.250 ],
00:05:48.250 "product_name": "Malloc disk",
00:05:48.250 "block_size": 512,
00:05:48.250 "num_blocks": 16384,
00:05:48.250 "uuid": "a4e295ef-133e-4b62-a050-192458558a50",
00:05:48.250 "assigned_rate_limits": {
00:05:48.250 "rw_ios_per_sec": 0,
00:05:48.250 "rw_mbytes_per_sec": 0,
00:05:48.250 "r_mbytes_per_sec": 0,
00:05:48.250 "w_mbytes_per_sec": 0
00:05:48.250 },
00:05:48.250 "claimed": false,
00:05:48.250 "zoned": false,
00:05:48.250 "supported_io_types": {
00:05:48.250 "read": true,
00:05:48.250 "write": true,
00:05:48.250 "unmap": true,
00:05:48.250 "flush": true,
00:05:48.250 "reset": true,
00:05:48.250 "nvme_admin": false,
00:05:48.250 "nvme_io": false,
00:05:48.250 "nvme_io_md": false,
00:05:48.250 "write_zeroes": true,
00:05:48.250 "zcopy": true,
00:05:48.250 "get_zone_info": false,
00:05:48.250 "zone_management": false,
00:05:48.250 "zone_append": false,
00:05:48.250 "compare": false,
00:05:48.250 "compare_and_write": false,
00:05:48.250 "abort": true,
00:05:48.250 "seek_hole": false,
00:05:48.250 "seek_data": false,
00:05:48.250 "copy": true,
00:05:48.250 "nvme_iov_md": false
00:05:48.250 },
00:05:48.250 "memory_domains": [
00:05:48.250 {
00:05:48.250 "dma_device_id": "system",
00:05:48.250 "dma_device_type": 1
00:05:48.250 },
00:05:48.250 {
00:05:48.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:48.250 "dma_device_type": 2
00:05:48.250 }
00:05:48.250 ],
00:05:48.250 "driver_specific": {}
00:05:48.250 }
00:05:48.250 ]'
00:05:48.509 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:48.509 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:48.509 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:48.509 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:48.509 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:48.509 [2024-11-27 09:42:49.421327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:48.509 [2024-11-27 09:42:49.421446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:48.509 [2024-11-27 09:42:49.421479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:05:48.509 [2024-11-27 09:42:49.421498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:48.510 [2024-11-27 09:42:49.424343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:48.510 [2024-11-27 09:42:49.424390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:48.510 Passthru0
00:05:48.510 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:48.510 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:48.510 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:48.510 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:48.510 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:48.510 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:48.510 {
00:05:48.510 "name": "Malloc0",
00:05:48.510 "aliases": [
00:05:48.510 "a4e295ef-133e-4b62-a050-192458558a50"
00:05:48.510 ],
00:05:48.510 "product_name": "Malloc disk",
00:05:48.510 "block_size": 512,
00:05:48.510 "num_blocks": 16384,
00:05:48.510 "uuid": "a4e295ef-133e-4b62-a050-192458558a50",
00:05:48.510 "assigned_rate_limits": {
00:05:48.510 "rw_ios_per_sec": 0,
00:05:48.510 "rw_mbytes_per_sec": 0,
00:05:48.510 "r_mbytes_per_sec": 0,
00:05:48.510 "w_mbytes_per_sec": 0
00:05:48.510 },
00:05:48.510 "claimed": true,
00:05:48.510 "claim_type": "exclusive_write",
00:05:48.510 "zoned": false,
00:05:48.510 "supported_io_types": {
00:05:48.510 "read": true,
00:05:48.510 "write": true,
00:05:48.510 "unmap": true,
00:05:48.510 "flush": true,
00:05:48.510 "reset": true,
00:05:48.510 "nvme_admin": false,
00:05:48.510 "nvme_io": false,
00:05:48.510 "nvme_io_md": false,
00:05:48.510 "write_zeroes": true,
00:05:48.510 "zcopy": true,
00:05:48.510 "get_zone_info": false,
00:05:48.510 "zone_management": false,
00:05:48.510 "zone_append": false,
00:05:48.510 "compare": false,
00:05:48.510 "compare_and_write": false,
00:05:48.510 "abort": true,
00:05:48.510 "seek_hole": false,
00:05:48.510 "seek_data": false,
00:05:48.510 "copy": true,
00:05:48.510 "nvme_iov_md": false
00:05:48.510 },
00:05:48.510 "memory_domains": [
00:05:48.510 {
00:05:48.510 "dma_device_id": "system",
00:05:48.510 "dma_device_type": 1
00:05:48.510 },
00:05:48.510 {
00:05:48.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:48.510 "dma_device_type": 2
00:05:48.510 }
00:05:48.510 ],
00:05:48.510 "driver_specific": {}
00:05:48.510 },
00:05:48.510 {
00:05:48.510 "name": "Passthru0",
00:05:48.510 "aliases": [
00:05:48.510 "451d138b-5b10-511e-927a-b2ca13453c82"
00:05:48.510 ],
00:05:48.510 "product_name": "passthru",
00:05:48.510 "block_size": 512,
00:05:48.510 "num_blocks": 16384,
00:05:48.510 "uuid": "451d138b-5b10-511e-927a-b2ca13453c82",
00:05:48.510 "assigned_rate_limits": {
00:05:48.510 "rw_ios_per_sec": 0,
00:05:48.510 "rw_mbytes_per_sec": 0,
00:05:48.510 "r_mbytes_per_sec": 0,
00:05:48.510 "w_mbytes_per_sec": 0
00:05:48.510 },
00:05:48.510 "claimed": false,
00:05:48.510 "zoned": false,
00:05:48.510 "supported_io_types": {
00:05:48.510 "read": true,
00:05:48.510 "write": true,
00:05:48.510 "unmap": true,
00:05:48.510 "flush": true,
00:05:48.510 "reset": true,
00:05:48.510 "nvme_admin": false,
00:05:48.510 "nvme_io": false,
00:05:48.510 "nvme_io_md": false,
00:05:48.510 "write_zeroes": true,
00:05:48.510 "zcopy": true,
00:05:48.510 "get_zone_info": false,
00:05:48.510 "zone_management": false,
00:05:48.510 "zone_append": false,
00:05:48.510 "compare": false,
00:05:48.510 "compare_and_write": false,
00:05:48.510 "abort": true,
00:05:48.510 "seek_hole": false,
00:05:48.510 "seek_data": false,
00:05:48.510 "copy": true,
00:05:48.510 "nvme_iov_md": false
00:05:48.510 },
00:05:48.510 "memory_domains": [
00:05:48.510 {
00:05:48.510 "dma_device_id": "system",
00:05:48.510 "dma_device_type": 1
00:05:48.510 },
00:05:48.510 {
00:05:48.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:48.510 "dma_device_type": 2
00:05:48.510 }
00:05:48.510 ],
00:05:48.510 "driver_specific": {
00:05:48.510 "passthru": {
00:05:48.510 "name": "Passthru0",
00:05:48.510 "base_bdev_name": "Malloc0"
00:05:48.510 }
00:05:48.510 }
00:05:48.510 }
00:05:48.510 ]'
00:05:48.510 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:48.510 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:48.510 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:48.510 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:48.510 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:48.510 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:48.510 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:48.510 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:48.510 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:48.510 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:48.510 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:48.510 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:48.510 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:48.510 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:48.510 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:48.510 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:48.510 09:42:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:48.510
00:05:48.510 real 0m0.358s
00:05:48.510 user 0m0.188s
00:05:48.510 sys 0m0.062s
00:05:48.510 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:48.510 09:42:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:48.510 ************************************
00:05:48.510 END TEST rpc_integrity
00:05:48.510 ************************************
00:05:48.770 09:42:49 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:05:48.770 09:42:49 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:48.770 09:42:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:48.770 09:42:49 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:48.770 ************************************
00:05:48.770 START TEST rpc_plugins
00:05:48.770 ************************************
00:05:48.770 09:42:49 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:05:48.770 09:42:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:05:48.770 09:42:49 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:48.770 09:42:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:48.770 09:42:49 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:48.770 09:42:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:05:48.770 09:42:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:05:48.770 09:42:49 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:48.770 09:42:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:48.770 09:42:49 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:48.770 09:42:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:05:48.770 {
00:05:48.770 "name": "Malloc1",
00:05:48.770 "aliases": [
00:05:48.770 "f336f45f-159b-47ec-b050-3bb54f8cc844"
00:05:48.770 ],
00:05:48.770 "product_name": "Malloc disk",
00:05:48.770 "block_size": 4096,
00:05:48.770 "num_blocks": 256,
00:05:48.770 "uuid": "f336f45f-159b-47ec-b050-3bb54f8cc844",
00:05:48.770 "assigned_rate_limits": {
00:05:48.770 "rw_ios_per_sec": 0,
00:05:48.770 "rw_mbytes_per_sec": 0,
00:05:48.770 "r_mbytes_per_sec": 0,
00:05:48.770 "w_mbytes_per_sec": 0
00:05:48.770 },
00:05:48.770 "claimed": false,
00:05:48.770 "zoned": false,
00:05:48.770 "supported_io_types": {
00:05:48.770 "read": true,
00:05:48.770 "write": true,
00:05:48.770 "unmap": true,
00:05:48.770 "flush": true,
00:05:48.770 "reset": true,
00:05:48.770 "nvme_admin": false,
00:05:48.770 "nvme_io": false,
00:05:48.770 "nvme_io_md": false,
00:05:48.770 "write_zeroes": true,
00:05:48.770 "zcopy": true,
00:05:48.770 "get_zone_info": false,
00:05:48.770 "zone_management": false,
00:05:48.770 "zone_append": false,
00:05:48.770 "compare": false,
00:05:48.770 "compare_and_write": false,
00:05:48.770 "abort": true,
00:05:48.770 "seek_hole": false,
00:05:48.770 "seek_data": false,
00:05:48.770 "copy":
true, 00:05:48.770 "nvme_iov_md": false 00:05:48.770 }, 00:05:48.770 "memory_domains": [ 00:05:48.770 { 00:05:48.770 "dma_device_id": "system", 00:05:48.770 "dma_device_type": 1 00:05:48.770 }, 00:05:48.770 { 00:05:48.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:48.770 "dma_device_type": 2 00:05:48.770 } 00:05:48.770 ], 00:05:48.770 "driver_specific": {} 00:05:48.770 } 00:05:48.770 ]' 00:05:48.770 09:42:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:48.770 09:42:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:48.770 09:42:49 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:48.770 09:42:49 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.770 09:42:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.770 09:42:49 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.770 09:42:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:48.770 09:42:49 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.770 09:42:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.770 09:42:49 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.770 09:42:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:48.770 09:42:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:48.770 09:42:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:48.770 00:05:48.770 real 0m0.178s 00:05:48.770 user 0m0.102s 00:05:48.771 sys 0m0.027s 00:05:48.771 09:42:49 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.771 09:42:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:48.771 ************************************ 00:05:48.771 END TEST rpc_plugins 00:05:48.771 ************************************ 00:05:49.031 09:42:49 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:49.031 09:42:49 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.031 09:42:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.031 09:42:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.031 ************************************ 00:05:49.031 START TEST rpc_trace_cmd_test 00:05:49.031 ************************************ 00:05:49.031 09:42:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:49.031 09:42:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:49.031 09:42:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:49.031 09:42:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.031 09:42:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:49.031 09:42:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.031 09:42:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:49.031 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57069", 00:05:49.031 "tpoint_group_mask": "0x8", 00:05:49.031 "iscsi_conn": { 00:05:49.031 "mask": "0x2", 00:05:49.031 "tpoint_mask": "0x0" 00:05:49.031 }, 00:05:49.031 "scsi": { 00:05:49.031 "mask": "0x4", 00:05:49.031 "tpoint_mask": "0x0" 00:05:49.031 }, 00:05:49.031 "bdev": { 00:05:49.031 "mask": "0x8", 00:05:49.031 "tpoint_mask": "0xffffffffffffffff" 00:05:49.031 }, 00:05:49.031 "nvmf_rdma": { 00:05:49.031 "mask": "0x10", 00:05:49.031 "tpoint_mask": "0x0" 00:05:49.031 }, 00:05:49.031 "nvmf_tcp": { 00:05:49.031 "mask": "0x20", 00:05:49.031 "tpoint_mask": "0x0" 00:05:49.031 }, 00:05:49.031 "ftl": { 00:05:49.031 "mask": "0x40", 00:05:49.031 "tpoint_mask": "0x0" 00:05:49.031 }, 00:05:49.031 "blobfs": { 00:05:49.031 "mask": "0x80", 00:05:49.031 "tpoint_mask": "0x0" 00:05:49.031 }, 00:05:49.031 "dsa": { 00:05:49.031 "mask": "0x200", 00:05:49.031 "tpoint_mask": "0x0" 00:05:49.031 }, 00:05:49.031 "thread": { 00:05:49.031 "mask": "0x400", 00:05:49.031 
"tpoint_mask": "0x0" 00:05:49.031 }, 00:05:49.031 "nvme_pcie": { 00:05:49.031 "mask": "0x800", 00:05:49.031 "tpoint_mask": "0x0" 00:05:49.031 }, 00:05:49.031 "iaa": { 00:05:49.031 "mask": "0x1000", 00:05:49.031 "tpoint_mask": "0x0" 00:05:49.031 }, 00:05:49.031 "nvme_tcp": { 00:05:49.031 "mask": "0x2000", 00:05:49.031 "tpoint_mask": "0x0" 00:05:49.031 }, 00:05:49.031 "bdev_nvme": { 00:05:49.031 "mask": "0x4000", 00:05:49.031 "tpoint_mask": "0x0" 00:05:49.031 }, 00:05:49.031 "sock": { 00:05:49.031 "mask": "0x8000", 00:05:49.031 "tpoint_mask": "0x0" 00:05:49.031 }, 00:05:49.031 "blob": { 00:05:49.031 "mask": "0x10000", 00:05:49.031 "tpoint_mask": "0x0" 00:05:49.031 }, 00:05:49.031 "bdev_raid": { 00:05:49.031 "mask": "0x20000", 00:05:49.031 "tpoint_mask": "0x0" 00:05:49.031 }, 00:05:49.031 "scheduler": { 00:05:49.031 "mask": "0x40000", 00:05:49.031 "tpoint_mask": "0x0" 00:05:49.031 } 00:05:49.031 }' 00:05:49.031 09:42:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:49.031 09:42:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:49.031 09:42:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:49.031 09:42:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:49.031 09:42:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:49.031 09:42:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:49.031 09:42:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:49.031 09:42:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:49.031 09:42:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:49.301 09:42:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:49.301 00:05:49.301 real 0m0.250s 00:05:49.301 user 0m0.194s 00:05:49.301 sys 0m0.047s 00:05:49.301 09:42:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:49.301 09:42:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:49.301 ************************************ 00:05:49.301 END TEST rpc_trace_cmd_test 00:05:49.301 ************************************ 00:05:49.301 09:42:50 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:49.301 09:42:50 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:49.301 09:42:50 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:49.301 09:42:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.301 09:42:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.301 09:42:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.302 ************************************ 00:05:49.302 START TEST rpc_daemon_integrity 00:05:49.302 ************************************ 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:49.302 { 00:05:49.302 "name": "Malloc2", 00:05:49.302 "aliases": [ 00:05:49.302 "4ea2b15e-b3bb-4135-8c95-1286f3db38b1" 00:05:49.302 ], 00:05:49.302 "product_name": "Malloc disk", 00:05:49.302 "block_size": 512, 00:05:49.302 "num_blocks": 16384, 00:05:49.302 "uuid": "4ea2b15e-b3bb-4135-8c95-1286f3db38b1", 00:05:49.302 "assigned_rate_limits": { 00:05:49.302 "rw_ios_per_sec": 0, 00:05:49.302 "rw_mbytes_per_sec": 0, 00:05:49.302 "r_mbytes_per_sec": 0, 00:05:49.302 "w_mbytes_per_sec": 0 00:05:49.302 }, 00:05:49.302 "claimed": false, 00:05:49.302 "zoned": false, 00:05:49.302 "supported_io_types": { 00:05:49.302 "read": true, 00:05:49.302 "write": true, 00:05:49.302 "unmap": true, 00:05:49.302 "flush": true, 00:05:49.302 "reset": true, 00:05:49.302 "nvme_admin": false, 00:05:49.302 "nvme_io": false, 00:05:49.302 "nvme_io_md": false, 00:05:49.302 "write_zeroes": true, 00:05:49.302 "zcopy": true, 00:05:49.302 "get_zone_info": false, 00:05:49.302 "zone_management": false, 00:05:49.302 "zone_append": false, 00:05:49.302 "compare": false, 00:05:49.302 "compare_and_write": false, 00:05:49.302 "abort": true, 00:05:49.302 "seek_hole": false, 00:05:49.302 "seek_data": false, 00:05:49.302 "copy": true, 00:05:49.302 "nvme_iov_md": false 00:05:49.302 }, 00:05:49.302 "memory_domains": [ 00:05:49.302 { 00:05:49.302 "dma_device_id": "system", 00:05:49.302 "dma_device_type": 1 00:05:49.302 }, 00:05:49.302 { 00:05:49.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.302 "dma_device_type": 2 00:05:49.302 } 
00:05:49.302 ], 00:05:49.302 "driver_specific": {} 00:05:49.302 } 00:05:49.302 ]' 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.302 [2024-11-27 09:42:50.404867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:49.302 [2024-11-27 09:42:50.404954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:49.302 [2024-11-27 09:42:50.404986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:49.302 [2024-11-27 09:42:50.405011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:49.302 [2024-11-27 09:42:50.407734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:49.302 [2024-11-27 09:42:50.407780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:49.302 Passthru0 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.302 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:49.563 { 00:05:49.563 "name": "Malloc2", 00:05:49.563 "aliases": [ 00:05:49.563 "4ea2b15e-b3bb-4135-8c95-1286f3db38b1" 
00:05:49.563 ], 00:05:49.563 "product_name": "Malloc disk", 00:05:49.563 "block_size": 512, 00:05:49.563 "num_blocks": 16384, 00:05:49.563 "uuid": "4ea2b15e-b3bb-4135-8c95-1286f3db38b1", 00:05:49.563 "assigned_rate_limits": { 00:05:49.563 "rw_ios_per_sec": 0, 00:05:49.563 "rw_mbytes_per_sec": 0, 00:05:49.563 "r_mbytes_per_sec": 0, 00:05:49.563 "w_mbytes_per_sec": 0 00:05:49.563 }, 00:05:49.563 "claimed": true, 00:05:49.563 "claim_type": "exclusive_write", 00:05:49.563 "zoned": false, 00:05:49.563 "supported_io_types": { 00:05:49.563 "read": true, 00:05:49.563 "write": true, 00:05:49.563 "unmap": true, 00:05:49.563 "flush": true, 00:05:49.563 "reset": true, 00:05:49.563 "nvme_admin": false, 00:05:49.563 "nvme_io": false, 00:05:49.563 "nvme_io_md": false, 00:05:49.563 "write_zeroes": true, 00:05:49.563 "zcopy": true, 00:05:49.563 "get_zone_info": false, 00:05:49.563 "zone_management": false, 00:05:49.563 "zone_append": false, 00:05:49.563 "compare": false, 00:05:49.563 "compare_and_write": false, 00:05:49.563 "abort": true, 00:05:49.563 "seek_hole": false, 00:05:49.563 "seek_data": false, 00:05:49.563 "copy": true, 00:05:49.563 "nvme_iov_md": false 00:05:49.563 }, 00:05:49.563 "memory_domains": [ 00:05:49.563 { 00:05:49.563 "dma_device_id": "system", 00:05:49.563 "dma_device_type": 1 00:05:49.563 }, 00:05:49.563 { 00:05:49.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.563 "dma_device_type": 2 00:05:49.563 } 00:05:49.563 ], 00:05:49.563 "driver_specific": {} 00:05:49.563 }, 00:05:49.563 { 00:05:49.563 "name": "Passthru0", 00:05:49.563 "aliases": [ 00:05:49.563 "7d59241d-7aa3-56d8-a5b8-d282c61f8e3f" 00:05:49.563 ], 00:05:49.563 "product_name": "passthru", 00:05:49.563 "block_size": 512, 00:05:49.563 "num_blocks": 16384, 00:05:49.563 "uuid": "7d59241d-7aa3-56d8-a5b8-d282c61f8e3f", 00:05:49.563 "assigned_rate_limits": { 00:05:49.563 "rw_ios_per_sec": 0, 00:05:49.563 "rw_mbytes_per_sec": 0, 00:05:49.563 "r_mbytes_per_sec": 0, 00:05:49.563 "w_mbytes_per_sec": 0 
00:05:49.563 }, 00:05:49.563 "claimed": false, 00:05:49.563 "zoned": false, 00:05:49.563 "supported_io_types": { 00:05:49.563 "read": true, 00:05:49.563 "write": true, 00:05:49.563 "unmap": true, 00:05:49.563 "flush": true, 00:05:49.563 "reset": true, 00:05:49.563 "nvme_admin": false, 00:05:49.563 "nvme_io": false, 00:05:49.563 "nvme_io_md": false, 00:05:49.563 "write_zeroes": true, 00:05:49.563 "zcopy": true, 00:05:49.563 "get_zone_info": false, 00:05:49.563 "zone_management": false, 00:05:49.563 "zone_append": false, 00:05:49.563 "compare": false, 00:05:49.563 "compare_and_write": false, 00:05:49.563 "abort": true, 00:05:49.563 "seek_hole": false, 00:05:49.563 "seek_data": false, 00:05:49.563 "copy": true, 00:05:49.563 "nvme_iov_md": false 00:05:49.563 }, 00:05:49.563 "memory_domains": [ 00:05:49.563 { 00:05:49.563 "dma_device_id": "system", 00:05:49.563 "dma_device_type": 1 00:05:49.563 }, 00:05:49.563 { 00:05:49.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.563 "dma_device_type": 2 00:05:49.563 } 00:05:49.563 ], 00:05:49.563 "driver_specific": { 00:05:49.563 "passthru": { 00:05:49.563 "name": "Passthru0", 00:05:49.563 "base_bdev_name": "Malloc2" 00:05:49.563 } 00:05:49.563 } 00:05:49.563 } 00:05:49.563 ]' 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:49.563 00:05:49.563 real 0m0.362s 00:05:49.563 user 0m0.192s 00:05:49.563 sys 0m0.061s 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.563 09:42:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.563 ************************************ 00:05:49.563 END TEST rpc_daemon_integrity 00:05:49.563 ************************************ 00:05:49.563 09:42:50 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:49.563 09:42:50 rpc -- rpc/rpc.sh@84 -- # killprocess 57069 00:05:49.563 09:42:50 rpc -- common/autotest_common.sh@954 -- # '[' -z 57069 ']' 00:05:49.563 09:42:50 rpc -- common/autotest_common.sh@958 -- # kill -0 57069 00:05:49.563 09:42:50 rpc -- common/autotest_common.sh@959 -- # uname 00:05:49.563 09:42:50 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.563 09:42:50 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57069 00:05:49.563 09:42:50 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.563 09:42:50 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.563 
killing process with pid 57069 00:05:49.563 09:42:50 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57069' 00:05:49.563 09:42:50 rpc -- common/autotest_common.sh@973 -- # kill 57069 00:05:49.563 09:42:50 rpc -- common/autotest_common.sh@978 -- # wait 57069 00:05:52.857 00:05:52.857 real 0m5.810s 00:05:52.857 user 0m6.206s 00:05:52.857 sys 0m1.146s 00:05:52.857 09:42:53 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.857 09:42:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.857 ************************************ 00:05:52.857 END TEST rpc 00:05:52.857 ************************************ 00:05:52.857 09:42:53 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:52.857 09:42:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.857 09:42:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.857 09:42:53 -- common/autotest_common.sh@10 -- # set +x 00:05:52.857 ************************************ 00:05:52.857 START TEST skip_rpc 00:05:52.857 ************************************ 00:05:52.857 09:42:53 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:52.857 * Looking for test storage... 
00:05:52.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:52.857 09:42:53 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:52.857 09:42:53 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:52.857 09:42:53 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:52.857 09:42:53 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.857 09:42:53 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:52.857 09:42:53 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.857 09:42:53 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:52.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.857 --rc genhtml_branch_coverage=1 00:05:52.857 --rc genhtml_function_coverage=1 00:05:52.857 --rc genhtml_legend=1 00:05:52.857 --rc geninfo_all_blocks=1 00:05:52.857 --rc geninfo_unexecuted_blocks=1 00:05:52.857 00:05:52.857 ' 00:05:52.857 09:42:53 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:52.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.857 --rc genhtml_branch_coverage=1 00:05:52.857 --rc genhtml_function_coverage=1 00:05:52.857 --rc genhtml_legend=1 00:05:52.857 --rc geninfo_all_blocks=1 00:05:52.858 --rc geninfo_unexecuted_blocks=1 00:05:52.858 00:05:52.858 ' 00:05:52.858 09:42:53 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:52.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.858 --rc genhtml_branch_coverage=1 00:05:52.858 --rc genhtml_function_coverage=1 00:05:52.858 --rc genhtml_legend=1 00:05:52.858 --rc geninfo_all_blocks=1 00:05:52.858 --rc geninfo_unexecuted_blocks=1 00:05:52.858 00:05:52.858 ' 00:05:52.858 09:42:53 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:52.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.858 --rc genhtml_branch_coverage=1 00:05:52.858 --rc genhtml_function_coverage=1 00:05:52.858 --rc genhtml_legend=1 00:05:52.858 --rc geninfo_all_blocks=1 00:05:52.858 --rc geninfo_unexecuted_blocks=1 00:05:52.858 00:05:52.858 ' 00:05:52.858 09:42:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:52.858 09:42:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:52.858 09:42:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:52.858 09:42:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.858 09:42:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.858 09:42:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.858 ************************************ 00:05:52.858 START TEST skip_rpc 00:05:52.858 ************************************ 00:05:52.858 09:42:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:52.858 09:42:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57303 00:05:52.858 09:42:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:52.858 09:42:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.858 09:42:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:52.858 [2024-11-27 09:42:53.757472] Starting SPDK v25.01-pre 
git sha1 597702889 / DPDK 24.03.0 initialization... 00:05:52.858 [2024-11-27 09:42:53.757652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57303 ] 00:05:52.858 [2024-11-27 09:42:53.938993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.118 [2024-11-27 09:42:54.085631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.424 09:42:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:58.424 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:58.424 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:58.424 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:58.424 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.424 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:58.424 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:58.424 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:58.424 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.424 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.424 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:58.424 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:58.424 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:58.424 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:58.424 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:05:58.424 09:42:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:58.424 09:42:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57303 00:05:58.425 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57303 ']' 00:05:58.425 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57303 00:05:58.425 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:58.425 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.425 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57303 00:05:58.425 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.425 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.425 killing process with pid 57303 00:05:58.425 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57303' 00:05:58.425 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57303 00:05:58.425 09:42:58 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57303 00:06:00.335 00:06:00.335 real 0m7.734s 00:06:00.335 user 0m7.088s 00:06:00.335 sys 0m0.563s 00:06:00.335 09:43:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.335 09:43:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.335 ************************************ 00:06:00.335 END TEST skip_rpc 00:06:00.335 ************************************ 00:06:00.335 09:43:01 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:00.335 09:43:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.335 09:43:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.335 09:43:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.335 
************************************ 00:06:00.335 START TEST skip_rpc_with_json 00:06:00.335 ************************************ 00:06:00.335 09:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:00.335 09:43:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:00.335 09:43:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57418 00:06:00.335 09:43:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.335 09:43:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.335 09:43:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57418 00:06:00.335 09:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57418 ']' 00:06:00.335 09:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.335 09:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.335 09:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.335 09:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.335 09:43:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.595 [2024-11-27 09:43:01.554616] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:06:00.595 [2024-11-27 09:43:01.554803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57418 ] 00:06:00.595 [2024-11-27 09:43:01.722037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.855 [2024-11-27 09:43:01.858152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.796 09:43:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.796 09:43:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:01.796 09:43:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:01.796 09:43:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.796 09:43:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.796 [2024-11-27 09:43:02.876294] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:01.796 request: 00:06:01.796 { 00:06:01.796 "trtype": "tcp", 00:06:01.796 "method": "nvmf_get_transports", 00:06:01.796 "req_id": 1 00:06:01.796 } 00:06:01.796 Got JSON-RPC error response 00:06:01.796 response: 00:06:01.796 { 00:06:01.796 "code": -19, 00:06:01.796 "message": "No such device" 00:06:01.796 } 00:06:01.796 09:43:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:01.796 09:43:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:01.796 09:43:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.796 09:43:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:01.796 [2024-11-27 09:43:02.888394] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:01.796 09:43:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.796 09:43:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:01.796 09:43:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.796 09:43:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.057 09:43:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.057 09:43:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:02.057 { 00:06:02.057 "subsystems": [ 00:06:02.057 { 00:06:02.057 "subsystem": "fsdev", 00:06:02.057 "config": [ 00:06:02.057 { 00:06:02.057 "method": "fsdev_set_opts", 00:06:02.057 "params": { 00:06:02.057 "fsdev_io_pool_size": 65535, 00:06:02.057 "fsdev_io_cache_size": 256 00:06:02.057 } 00:06:02.057 } 00:06:02.057 ] 00:06:02.057 }, 00:06:02.057 { 00:06:02.057 "subsystem": "keyring", 00:06:02.057 "config": [] 00:06:02.057 }, 00:06:02.057 { 00:06:02.057 "subsystem": "iobuf", 00:06:02.057 "config": [ 00:06:02.057 { 00:06:02.057 "method": "iobuf_set_options", 00:06:02.057 "params": { 00:06:02.057 "small_pool_count": 8192, 00:06:02.057 "large_pool_count": 1024, 00:06:02.057 "small_bufsize": 8192, 00:06:02.057 "large_bufsize": 135168, 00:06:02.057 "enable_numa": false 00:06:02.057 } 00:06:02.057 } 00:06:02.057 ] 00:06:02.057 }, 00:06:02.057 { 00:06:02.057 "subsystem": "sock", 00:06:02.057 "config": [ 00:06:02.057 { 00:06:02.057 "method": "sock_set_default_impl", 00:06:02.057 "params": { 00:06:02.057 "impl_name": "posix" 00:06:02.057 } 00:06:02.057 }, 00:06:02.057 { 00:06:02.057 "method": "sock_impl_set_options", 00:06:02.057 "params": { 00:06:02.057 "impl_name": "ssl", 00:06:02.057 "recv_buf_size": 4096, 00:06:02.057 "send_buf_size": 4096, 00:06:02.057 "enable_recv_pipe": true, 00:06:02.057 "enable_quickack": false, 00:06:02.057 
"enable_placement_id": 0, 00:06:02.057 "enable_zerocopy_send_server": true, 00:06:02.057 "enable_zerocopy_send_client": false, 00:06:02.057 "zerocopy_threshold": 0, 00:06:02.057 "tls_version": 0, 00:06:02.057 "enable_ktls": false 00:06:02.057 } 00:06:02.057 }, 00:06:02.057 { 00:06:02.057 "method": "sock_impl_set_options", 00:06:02.057 "params": { 00:06:02.057 "impl_name": "posix", 00:06:02.057 "recv_buf_size": 2097152, 00:06:02.057 "send_buf_size": 2097152, 00:06:02.057 "enable_recv_pipe": true, 00:06:02.057 "enable_quickack": false, 00:06:02.057 "enable_placement_id": 0, 00:06:02.057 "enable_zerocopy_send_server": true, 00:06:02.057 "enable_zerocopy_send_client": false, 00:06:02.057 "zerocopy_threshold": 0, 00:06:02.057 "tls_version": 0, 00:06:02.057 "enable_ktls": false 00:06:02.057 } 00:06:02.057 } 00:06:02.057 ] 00:06:02.057 }, 00:06:02.057 { 00:06:02.057 "subsystem": "vmd", 00:06:02.057 "config": [] 00:06:02.057 }, 00:06:02.057 { 00:06:02.057 "subsystem": "accel", 00:06:02.057 "config": [ 00:06:02.057 { 00:06:02.057 "method": "accel_set_options", 00:06:02.057 "params": { 00:06:02.057 "small_cache_size": 128, 00:06:02.057 "large_cache_size": 16, 00:06:02.057 "task_count": 2048, 00:06:02.057 "sequence_count": 2048, 00:06:02.057 "buf_count": 2048 00:06:02.057 } 00:06:02.057 } 00:06:02.057 ] 00:06:02.057 }, 00:06:02.057 { 00:06:02.057 "subsystem": "bdev", 00:06:02.057 "config": [ 00:06:02.057 { 00:06:02.057 "method": "bdev_set_options", 00:06:02.057 "params": { 00:06:02.057 "bdev_io_pool_size": 65535, 00:06:02.057 "bdev_io_cache_size": 256, 00:06:02.058 "bdev_auto_examine": true, 00:06:02.058 "iobuf_small_cache_size": 128, 00:06:02.058 "iobuf_large_cache_size": 16 00:06:02.058 } 00:06:02.058 }, 00:06:02.058 { 00:06:02.058 "method": "bdev_raid_set_options", 00:06:02.058 "params": { 00:06:02.058 "process_window_size_kb": 1024, 00:06:02.058 "process_max_bandwidth_mb_sec": 0 00:06:02.058 } 00:06:02.058 }, 00:06:02.058 { 00:06:02.058 "method": "bdev_iscsi_set_options", 
00:06:02.058 "params": { 00:06:02.058 "timeout_sec": 30 00:06:02.058 } 00:06:02.058 }, 00:06:02.058 { 00:06:02.058 "method": "bdev_nvme_set_options", 00:06:02.058 "params": { 00:06:02.058 "action_on_timeout": "none", 00:06:02.058 "timeout_us": 0, 00:06:02.058 "timeout_admin_us": 0, 00:06:02.058 "keep_alive_timeout_ms": 10000, 00:06:02.058 "arbitration_burst": 0, 00:06:02.058 "low_priority_weight": 0, 00:06:02.058 "medium_priority_weight": 0, 00:06:02.058 "high_priority_weight": 0, 00:06:02.058 "nvme_adminq_poll_period_us": 10000, 00:06:02.058 "nvme_ioq_poll_period_us": 0, 00:06:02.058 "io_queue_requests": 0, 00:06:02.058 "delay_cmd_submit": true, 00:06:02.058 "transport_retry_count": 4, 00:06:02.058 "bdev_retry_count": 3, 00:06:02.058 "transport_ack_timeout": 0, 00:06:02.058 "ctrlr_loss_timeout_sec": 0, 00:06:02.058 "reconnect_delay_sec": 0, 00:06:02.058 "fast_io_fail_timeout_sec": 0, 00:06:02.058 "disable_auto_failback": false, 00:06:02.058 "generate_uuids": false, 00:06:02.058 "transport_tos": 0, 00:06:02.058 "nvme_error_stat": false, 00:06:02.058 "rdma_srq_size": 0, 00:06:02.058 "io_path_stat": false, 00:06:02.058 "allow_accel_sequence": false, 00:06:02.058 "rdma_max_cq_size": 0, 00:06:02.058 "rdma_cm_event_timeout_ms": 0, 00:06:02.058 "dhchap_digests": [ 00:06:02.058 "sha256", 00:06:02.058 "sha384", 00:06:02.058 "sha512" 00:06:02.058 ], 00:06:02.058 "dhchap_dhgroups": [ 00:06:02.058 "null", 00:06:02.058 "ffdhe2048", 00:06:02.058 "ffdhe3072", 00:06:02.058 "ffdhe4096", 00:06:02.058 "ffdhe6144", 00:06:02.058 "ffdhe8192" 00:06:02.058 ] 00:06:02.058 } 00:06:02.058 }, 00:06:02.058 { 00:06:02.058 "method": "bdev_nvme_set_hotplug", 00:06:02.058 "params": { 00:06:02.058 "period_us": 100000, 00:06:02.058 "enable": false 00:06:02.058 } 00:06:02.058 }, 00:06:02.058 { 00:06:02.058 "method": "bdev_wait_for_examine" 00:06:02.058 } 00:06:02.058 ] 00:06:02.058 }, 00:06:02.058 { 00:06:02.058 "subsystem": "scsi", 00:06:02.058 "config": null 00:06:02.058 }, 00:06:02.058 { 
00:06:02.058 "subsystem": "scheduler", 00:06:02.058 "config": [ 00:06:02.058 { 00:06:02.058 "method": "framework_set_scheduler", 00:06:02.058 "params": { 00:06:02.058 "name": "static" 00:06:02.058 } 00:06:02.058 } 00:06:02.058 ] 00:06:02.058 }, 00:06:02.058 { 00:06:02.058 "subsystem": "vhost_scsi", 00:06:02.058 "config": [] 00:06:02.058 }, 00:06:02.058 { 00:06:02.058 "subsystem": "vhost_blk", 00:06:02.058 "config": [] 00:06:02.058 }, 00:06:02.058 { 00:06:02.058 "subsystem": "ublk", 00:06:02.058 "config": [] 00:06:02.058 }, 00:06:02.058 { 00:06:02.058 "subsystem": "nbd", 00:06:02.058 "config": [] 00:06:02.058 }, 00:06:02.058 { 00:06:02.058 "subsystem": "nvmf", 00:06:02.058 "config": [ 00:06:02.058 { 00:06:02.058 "method": "nvmf_set_config", 00:06:02.058 "params": { 00:06:02.058 "discovery_filter": "match_any", 00:06:02.058 "admin_cmd_passthru": { 00:06:02.058 "identify_ctrlr": false 00:06:02.058 }, 00:06:02.058 "dhchap_digests": [ 00:06:02.058 "sha256", 00:06:02.058 "sha384", 00:06:02.058 "sha512" 00:06:02.058 ], 00:06:02.058 "dhchap_dhgroups": [ 00:06:02.058 "null", 00:06:02.058 "ffdhe2048", 00:06:02.058 "ffdhe3072", 00:06:02.058 "ffdhe4096", 00:06:02.058 "ffdhe6144", 00:06:02.058 "ffdhe8192" 00:06:02.058 ] 00:06:02.058 } 00:06:02.058 }, 00:06:02.058 { 00:06:02.058 "method": "nvmf_set_max_subsystems", 00:06:02.058 "params": { 00:06:02.058 "max_subsystems": 1024 00:06:02.058 } 00:06:02.058 }, 00:06:02.058 { 00:06:02.058 "method": "nvmf_set_crdt", 00:06:02.058 "params": { 00:06:02.058 "crdt1": 0, 00:06:02.058 "crdt2": 0, 00:06:02.058 "crdt3": 0 00:06:02.058 } 00:06:02.058 }, 00:06:02.058 { 00:06:02.058 "method": "nvmf_create_transport", 00:06:02.058 "params": { 00:06:02.058 "trtype": "TCP", 00:06:02.058 "max_queue_depth": 128, 00:06:02.058 "max_io_qpairs_per_ctrlr": 127, 00:06:02.058 "in_capsule_data_size": 4096, 00:06:02.058 "max_io_size": 131072, 00:06:02.058 "io_unit_size": 131072, 00:06:02.058 "max_aq_depth": 128, 00:06:02.058 "num_shared_buffers": 511, 
00:06:02.058 "buf_cache_size": 4294967295, 00:06:02.058 "dif_insert_or_strip": false, 00:06:02.058 "zcopy": false, 00:06:02.058 "c2h_success": true, 00:06:02.058 "sock_priority": 0, 00:06:02.058 "abort_timeout_sec": 1, 00:06:02.058 "ack_timeout": 0, 00:06:02.058 "data_wr_pool_size": 0 00:06:02.058 } 00:06:02.058 } 00:06:02.058 ] 00:06:02.058 }, 00:06:02.058 { 00:06:02.058 "subsystem": "iscsi", 00:06:02.058 "config": [ 00:06:02.058 { 00:06:02.058 "method": "iscsi_set_options", 00:06:02.058 "params": { 00:06:02.058 "node_base": "iqn.2016-06.io.spdk", 00:06:02.058 "max_sessions": 128, 00:06:02.058 "max_connections_per_session": 2, 00:06:02.058 "max_queue_depth": 64, 00:06:02.058 "default_time2wait": 2, 00:06:02.058 "default_time2retain": 20, 00:06:02.058 "first_burst_length": 8192, 00:06:02.058 "immediate_data": true, 00:06:02.058 "allow_duplicated_isid": false, 00:06:02.058 "error_recovery_level": 0, 00:06:02.058 "nop_timeout": 60, 00:06:02.058 "nop_in_interval": 30, 00:06:02.058 "disable_chap": false, 00:06:02.058 "require_chap": false, 00:06:02.059 "mutual_chap": false, 00:06:02.059 "chap_group": 0, 00:06:02.059 "max_large_datain_per_connection": 64, 00:06:02.059 "max_r2t_per_connection": 4, 00:06:02.059 "pdu_pool_size": 36864, 00:06:02.059 "immediate_data_pool_size": 16384, 00:06:02.059 "data_out_pool_size": 2048 00:06:02.059 } 00:06:02.059 } 00:06:02.059 ] 00:06:02.059 } 00:06:02.059 ] 00:06:02.059 } 00:06:02.059 09:43:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:02.059 09:43:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57418 00:06:02.059 09:43:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57418 ']' 00:06:02.059 09:43:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57418 00:06:02.059 09:43:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:02.059 09:43:03 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.059 09:43:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57418 00:06:02.059 09:43:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.059 09:43:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.059 killing process with pid 57418 00:06:02.059 09:43:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57418' 00:06:02.059 09:43:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57418 00:06:02.059 09:43:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57418 00:06:05.353 09:43:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57474 00:06:05.353 09:43:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:05.353 09:43:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:10.635 09:43:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57474 00:06:10.635 09:43:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57474 ']' 00:06:10.635 09:43:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57474 00:06:10.635 09:43:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:10.635 09:43:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.635 09:43:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57474 00:06:10.635 09:43:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.635 09:43:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:06:10.635 killing process with pid 57474 00:06:10.635 09:43:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57474' 00:06:10.635 09:43:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57474 00:06:10.635 09:43:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57474 00:06:12.541 09:43:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:12.541 09:43:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:12.541 00:06:12.541 real 0m12.023s 00:06:12.541 user 0m11.132s 00:06:12.541 sys 0m1.212s 00:06:12.541 09:43:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.541 09:43:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.541 ************************************ 00:06:12.541 END TEST skip_rpc_with_json 00:06:12.541 ************************************ 00:06:12.541 09:43:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:12.541 09:43:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.541 09:43:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.541 09:43:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.541 ************************************ 00:06:12.541 START TEST skip_rpc_with_delay 00:06:12.541 ************************************ 00:06:12.541 09:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:12.541 09:43:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:12.541 09:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:12.541 09:43:13 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:12.541 09:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.541 09:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.541 09:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.541 09:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.541 09:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.541 09:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.541 09:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.541 09:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:12.541 09:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:12.541 [2024-11-27 09:43:13.649469] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:12.800 09:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:12.800 09:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.800 09:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.800 09:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.800 00:06:12.800 real 0m0.184s 00:06:12.800 user 0m0.089s 00:06:12.800 sys 0m0.093s 00:06:12.800 09:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.800 09:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:12.800 ************************************ 00:06:12.800 END TEST skip_rpc_with_delay 00:06:12.800 ************************************ 00:06:12.800 09:43:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:12.800 09:43:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:12.800 09:43:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:12.800 09:43:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.800 09:43:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.800 09:43:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.800 ************************************ 00:06:12.800 START TEST exit_on_failed_rpc_init 00:06:12.800 ************************************ 00:06:12.800 09:43:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:12.801 09:43:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57608 00:06:12.801 09:43:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57608 00:06:12.801 09:43:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.801 09:43:13 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57608 ']' 00:06:12.801 09:43:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.801 09:43:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.801 09:43:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.801 09:43:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.801 09:43:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:12.801 [2024-11-27 09:43:13.902095] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:06:12.801 [2024-11-27 09:43:13.902222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57608 ] 00:06:13.059 [2024-11-27 09:43:14.083087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.318 [2024-11-27 09:43:14.216746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.254 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.254 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:14.254 09:43:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.254 09:43:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:14.254 09:43:15 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:14.254 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:14.254 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:14.254 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.254 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:14.254 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.254 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:14.254 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.254 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:14.254 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:14.254 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:14.255 [2024-11-27 09:43:15.354141] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:06:14.255 [2024-11-27 09:43:15.354269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57631 ] 00:06:14.514 [2024-11-27 09:43:15.533558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.774 [2024-11-27 09:43:15.678475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.774 [2024-11-27 09:43:15.678615] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:14.774 [2024-11-27 09:43:15.678631] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:14.774 [2024-11-27 09:43:15.678651] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.045 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:15.045 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.045 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:15.045 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:15.045 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:15.045 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.045 09:43:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:15.045 09:43:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57608 00:06:15.045 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57608 ']' 00:06:15.045 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57608 00:06:15.045 09:43:15 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:15.045 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.045 09:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57608 00:06:15.045 09:43:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.045 09:43:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.045 killing process with pid 57608 00:06:15.045 09:43:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57608' 00:06:15.045 09:43:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57608 00:06:15.045 09:43:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57608 00:06:17.576 00:06:17.576 real 0m4.882s 00:06:17.576 user 0m5.042s 00:06:17.576 sys 0m0.796s 00:06:17.576 09:43:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.576 09:43:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:17.576 ************************************ 00:06:17.576 END TEST exit_on_failed_rpc_init 00:06:17.576 ************************************ 00:06:17.835 09:43:18 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:17.835 00:06:17.835 real 0m25.306s 00:06:17.835 user 0m23.534s 00:06:17.835 sys 0m2.985s 00:06:17.835 09:43:18 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.835 09:43:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.835 ************************************ 00:06:17.835 END TEST skip_rpc 00:06:17.835 ************************************ 00:06:17.835 09:43:18 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:17.835 09:43:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.835 09:43:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.835 09:43:18 -- common/autotest_common.sh@10 -- # set +x 00:06:17.835 ************************************ 00:06:17.835 START TEST rpc_client 00:06:17.835 ************************************ 00:06:17.835 09:43:18 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:17.835 * Looking for test storage... 00:06:17.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:17.835 09:43:18 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:17.835 09:43:18 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:17.835 09:43:18 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.094 09:43:18 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.094 09:43:18 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.094 09:43:18 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.094 09:43:18 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.094 09:43:18 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.094 09:43:18 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.094 09:43:18 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.094 09:43:18 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.094 09:43:18 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.094 09:43:18 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.094 09:43:18 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.094 09:43:18 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.094 09:43:18 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:18.094 09:43:18 rpc_client -- scripts/common.sh@345 
-- # : 1 00:06:18.094 09:43:18 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.094 09:43:18 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.094 09:43:18 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:18.094 09:43:18 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:18.094 09:43:19 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.094 09:43:19 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:18.094 09:43:19 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.094 09:43:19 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:18.094 09:43:19 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:18.094 09:43:19 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.094 09:43:19 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:18.094 09:43:19 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.094 09:43:19 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.094 09:43:19 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.094 09:43:19 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:18.094 09:43:19 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.094 09:43:19 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.094 --rc genhtml_branch_coverage=1 00:06:18.094 --rc genhtml_function_coverage=1 00:06:18.094 --rc genhtml_legend=1 00:06:18.094 --rc geninfo_all_blocks=1 00:06:18.094 --rc geninfo_unexecuted_blocks=1 00:06:18.094 00:06:18.094 ' 00:06:18.094 09:43:19 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.094 --rc genhtml_branch_coverage=1 00:06:18.094 --rc genhtml_function_coverage=1 00:06:18.094 --rc 
genhtml_legend=1 00:06:18.094 --rc geninfo_all_blocks=1 00:06:18.094 --rc geninfo_unexecuted_blocks=1 00:06:18.094 00:06:18.094 ' 00:06:18.094 09:43:19 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.094 --rc genhtml_branch_coverage=1 00:06:18.094 --rc genhtml_function_coverage=1 00:06:18.094 --rc genhtml_legend=1 00:06:18.094 --rc geninfo_all_blocks=1 00:06:18.094 --rc geninfo_unexecuted_blocks=1 00:06:18.094 00:06:18.094 ' 00:06:18.094 09:43:19 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.094 --rc genhtml_branch_coverage=1 00:06:18.094 --rc genhtml_function_coverage=1 00:06:18.094 --rc genhtml_legend=1 00:06:18.094 --rc geninfo_all_blocks=1 00:06:18.094 --rc geninfo_unexecuted_blocks=1 00:06:18.094 00:06:18.094 ' 00:06:18.094 09:43:19 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:18.094 OK 00:06:18.094 09:43:19 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:18.094 00:06:18.094 real 0m0.296s 00:06:18.094 user 0m0.154s 00:06:18.094 sys 0m0.160s 00:06:18.094 09:43:19 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.094 09:43:19 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:18.094 ************************************ 00:06:18.094 END TEST rpc_client 00:06:18.094 ************************************ 00:06:18.094 09:43:19 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:18.094 09:43:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.094 09:43:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.094 09:43:19 -- common/autotest_common.sh@10 -- # set +x 00:06:18.094 ************************************ 00:06:18.094 START TEST json_config 
00:06:18.094 ************************************ 00:06:18.094 09:43:19 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:18.353 09:43:19 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.353 09:43:19 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.353 09:43:19 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.353 09:43:19 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.353 09:43:19 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.353 09:43:19 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.353 09:43:19 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.353 09:43:19 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.353 09:43:19 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.353 09:43:19 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.353 09:43:19 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.353 09:43:19 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.353 09:43:19 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.353 09:43:19 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.353 09:43:19 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.353 09:43:19 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:18.353 09:43:19 json_config -- scripts/common.sh@345 -- # : 1 00:06:18.353 09:43:19 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.353 09:43:19 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.353 09:43:19 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:18.353 09:43:19 json_config -- scripts/common.sh@353 -- # local d=1 00:06:18.353 09:43:19 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.353 09:43:19 json_config -- scripts/common.sh@355 -- # echo 1 00:06:18.353 09:43:19 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.353 09:43:19 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:18.353 09:43:19 json_config -- scripts/common.sh@353 -- # local d=2 00:06:18.353 09:43:19 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.353 09:43:19 json_config -- scripts/common.sh@355 -- # echo 2 00:06:18.353 09:43:19 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.353 09:43:19 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.353 09:43:19 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.353 09:43:19 json_config -- scripts/common.sh@368 -- # return 0 00:06:18.353 09:43:19 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.353 09:43:19 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.353 --rc genhtml_branch_coverage=1 00:06:18.353 --rc genhtml_function_coverage=1 00:06:18.353 --rc genhtml_legend=1 00:06:18.353 --rc geninfo_all_blocks=1 00:06:18.353 --rc geninfo_unexecuted_blocks=1 00:06:18.353 00:06:18.353 ' 00:06:18.353 09:43:19 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.353 --rc genhtml_branch_coverage=1 00:06:18.353 --rc genhtml_function_coverage=1 00:06:18.353 --rc genhtml_legend=1 00:06:18.353 --rc geninfo_all_blocks=1 00:06:18.353 --rc geninfo_unexecuted_blocks=1 00:06:18.353 00:06:18.353 ' 00:06:18.353 09:43:19 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.353 --rc genhtml_branch_coverage=1 00:06:18.353 --rc genhtml_function_coverage=1 00:06:18.353 --rc genhtml_legend=1 00:06:18.353 --rc geninfo_all_blocks=1 00:06:18.353 --rc geninfo_unexecuted_blocks=1 00:06:18.353 00:06:18.353 ' 00:06:18.353 09:43:19 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.353 --rc genhtml_branch_coverage=1 00:06:18.353 --rc genhtml_function_coverage=1 00:06:18.353 --rc genhtml_legend=1 00:06:18.353 --rc geninfo_all_blocks=1 00:06:18.353 --rc geninfo_unexecuted_blocks=1 00:06:18.353 00:06:18.353 ' 00:06:18.353 09:43:19 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:730d5460-0697-4866-b4bd-cde3bf211b9d 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=730d5460-0697-4866-b4bd-cde3bf211b9d 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:18.353 09:43:19 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:18.353 09:43:19 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.353 09:43:19 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.353 09:43:19 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.353 09:43:19 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.353 09:43:19 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.353 09:43:19 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.353 09:43:19 json_config -- paths/export.sh@5 -- # export PATH 00:06:18.353 09:43:19 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@51 -- # : 0 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:18.353 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:18.353 09:43:19 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:18.353 09:43:19 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:06:18.353 09:43:19 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:18.353 09:43:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:18.353 09:43:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:18.353 09:43:19 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:18.353 09:43:19 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:18.353 WARNING: No tests are enabled so not running JSON configuration tests 00:06:18.353 09:43:19 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:18.353 00:06:18.353 real 0m0.233s 00:06:18.353 user 0m0.140s 00:06:18.353 sys 0m0.096s 00:06:18.353 09:43:19 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.353 09:43:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:18.353 ************************************ 00:06:18.353 END TEST json_config 00:06:18.353 ************************************ 00:06:18.353 09:43:19 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:18.353 09:43:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.353 09:43:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.353 09:43:19 -- common/autotest_common.sh@10 -- # set +x 00:06:18.353 ************************************ 00:06:18.353 START TEST json_config_extra_key 00:06:18.353 ************************************ 00:06:18.353 09:43:19 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:18.612 09:43:19 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.612 09:43:19 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:06:18.612 09:43:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.612 09:43:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.612 09:43:19 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:18.612 09:43:19 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.612 09:43:19 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.612 --rc genhtml_branch_coverage=1 00:06:18.612 --rc genhtml_function_coverage=1 00:06:18.612 --rc genhtml_legend=1 00:06:18.612 --rc geninfo_all_blocks=1 00:06:18.612 --rc geninfo_unexecuted_blocks=1 00:06:18.612 00:06:18.613 ' 00:06:18.613 09:43:19 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.613 --rc genhtml_branch_coverage=1 00:06:18.613 --rc genhtml_function_coverage=1 00:06:18.613 --rc 
genhtml_legend=1 00:06:18.613 --rc geninfo_all_blocks=1 00:06:18.613 --rc geninfo_unexecuted_blocks=1 00:06:18.613 00:06:18.613 ' 00:06:18.613 09:43:19 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.613 --rc genhtml_branch_coverage=1 00:06:18.613 --rc genhtml_function_coverage=1 00:06:18.613 --rc genhtml_legend=1 00:06:18.613 --rc geninfo_all_blocks=1 00:06:18.613 --rc geninfo_unexecuted_blocks=1 00:06:18.613 00:06:18.613 ' 00:06:18.613 09:43:19 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.613 --rc genhtml_branch_coverage=1 00:06:18.613 --rc genhtml_function_coverage=1 00:06:18.613 --rc genhtml_legend=1 00:06:18.613 --rc geninfo_all_blocks=1 00:06:18.613 --rc geninfo_unexecuted_blocks=1 00:06:18.613 00:06:18.613 ' 00:06:18.613 09:43:19 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:730d5460-0697-4866-b4bd-cde3bf211b9d 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=730d5460-0697-4866-b4bd-cde3bf211b9d 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:18.613 09:43:19 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:18.613 09:43:19 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.613 09:43:19 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.613 09:43:19 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.613 09:43:19 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.613 09:43:19 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.613 09:43:19 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.613 09:43:19 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:18.613 09:43:19 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:18.613 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:18.613 09:43:19 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:18.613 09:43:19 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:18.613 09:43:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:18.613 09:43:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:18.613 09:43:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:18.613 09:43:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:18.613 09:43:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:18.613 09:43:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:18.613 09:43:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:18.613 09:43:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:18.613 09:43:19 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:18.613 09:43:19 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:18.614 INFO: launching applications... 
00:06:18.614 09:43:19 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:18.614 09:43:19 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:18.614 09:43:19 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:18.614 09:43:19 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:18.614 09:43:19 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:18.614 09:43:19 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:18.614 Waiting for target to run... 00:06:18.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:18.614 09:43:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.614 09:43:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:18.614 09:43:19 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57841 00:06:18.614 09:43:19 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:18.614 09:43:19 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57841 /var/tmp/spdk_tgt.sock 00:06:18.614 09:43:19 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57841 ']' 00:06:18.614 09:43:19 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:18.614 09:43:19 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.614 09:43:19 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:06:18.614 09:43:19 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.614 09:43:19 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:18.614 09:43:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:18.873 [2024-11-27 09:43:19.789463] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:06:18.873 [2024-11-27 09:43:19.789652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57841 ] 00:06:19.442 [2024-11-27 09:43:20.368216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.442 [2024-11-27 09:43:20.491535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.379 00:06:20.379 INFO: shutting down applications... 00:06:20.379 09:43:21 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.379 09:43:21 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:20.379 09:43:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:20.379 09:43:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:20.379 09:43:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:20.379 09:43:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:20.379 09:43:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:20.379 09:43:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57841 ]] 00:06:20.379 09:43:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57841 00:06:20.379 09:43:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:20.379 09:43:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.379 09:43:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57841 00:06:20.379 09:43:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:20.947 09:43:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:20.947 09:43:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.947 09:43:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57841 00:06:20.947 09:43:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:21.205 09:43:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:21.205 09:43:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.206 09:43:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57841 00:06:21.206 09:43:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:21.774 09:43:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:21.774 09:43:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.774 09:43:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57841 00:06:21.774 09:43:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:22.343 09:43:23 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:06:22.343 09:43:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.343 09:43:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57841 00:06:22.343 09:43:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:22.913 09:43:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:22.913 09:43:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.913 09:43:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57841 00:06:22.913 09:43:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:23.480 09:43:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:23.480 09:43:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:23.480 09:43:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57841 00:06:23.480 09:43:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:23.739 09:43:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:23.739 09:43:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:23.739 SPDK target shutdown done 00:06:23.739 09:43:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57841 00:06:23.739 09:43:24 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:23.739 09:43:24 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:23.739 09:43:24 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:23.739 09:43:24 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:23.739 09:43:24 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:23.739 Success 00:06:23.739 00:06:23.739 real 0m5.372s 00:06:23.739 user 0m4.461s 00:06:23.739 sys 0m0.822s 00:06:23.739 09:43:24 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:06:23.739 09:43:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:23.739 ************************************ 00:06:23.739 END TEST json_config_extra_key 00:06:23.739 ************************************ 00:06:23.999 09:43:24 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:23.999 09:43:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.999 09:43:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.999 09:43:24 -- common/autotest_common.sh@10 -- # set +x 00:06:23.999 ************************************ 00:06:23.999 START TEST alias_rpc 00:06:23.999 ************************************ 00:06:23.999 09:43:24 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:23.999 * Looking for test storage... 00:06:23.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:23.999 09:43:25 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:23.999 09:43:25 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.999 09:43:25 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.999 09:43:25 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.999 09:43:25 alias_rpc -- 
scripts/common.sh@340 -- # ver1_l=2 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.999 09:43:25 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:23.999 09:43:25 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.999 09:43:25 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.999 --rc genhtml_branch_coverage=1 00:06:23.999 --rc genhtml_function_coverage=1 00:06:23.999 --rc genhtml_legend=1 00:06:23.999 --rc geninfo_all_blocks=1 00:06:23.999 --rc 
geninfo_unexecuted_blocks=1 00:06:23.999 00:06:23.999 ' 00:06:23.999 09:43:25 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.999 --rc genhtml_branch_coverage=1 00:06:23.999 --rc genhtml_function_coverage=1 00:06:23.999 --rc genhtml_legend=1 00:06:23.999 --rc geninfo_all_blocks=1 00:06:23.999 --rc geninfo_unexecuted_blocks=1 00:06:23.999 00:06:23.999 ' 00:06:23.999 09:43:25 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.999 --rc genhtml_branch_coverage=1 00:06:23.999 --rc genhtml_function_coverage=1 00:06:23.999 --rc genhtml_legend=1 00:06:23.999 --rc geninfo_all_blocks=1 00:06:23.999 --rc geninfo_unexecuted_blocks=1 00:06:23.999 00:06:23.999 ' 00:06:23.999 09:43:25 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.999 --rc genhtml_branch_coverage=1 00:06:23.999 --rc genhtml_function_coverage=1 00:06:23.999 --rc genhtml_legend=1 00:06:23.999 --rc geninfo_all_blocks=1 00:06:23.999 --rc geninfo_unexecuted_blocks=1 00:06:23.999 00:06:23.999 ' 00:06:23.999 09:43:25 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:23.999 09:43:25 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57965 00:06:23.999 09:43:25 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:24.000 09:43:25 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57965 00:06:24.000 09:43:25 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57965 ']' 00:06:24.000 09:43:25 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.000 09:43:25 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.000 09:43:25 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.000 09:43:25 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.000 09:43:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.260 [2024-11-27 09:43:25.216580] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:06:24.260 [2024-11-27 09:43:25.217211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57965 ] 00:06:24.520 [2024-11-27 09:43:25.401379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.520 [2024-11-27 09:43:25.545066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.461 09:43:26 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.461 09:43:26 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:25.461 09:43:26 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:25.721 09:43:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57965 00:06:25.721 09:43:26 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57965 ']' 00:06:25.721 09:43:26 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57965 00:06:25.721 09:43:26 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:25.721 09:43:26 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.721 09:43:26 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57965 00:06:25.980 killing process with pid 57965 00:06:25.980 09:43:26 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.980 09:43:26 alias_rpc -- common/autotest_common.sh@964 -- # 
'[' reactor_0 = sudo ']' 00:06:25.980 09:43:26 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57965' 00:06:25.980 09:43:26 alias_rpc -- common/autotest_common.sh@973 -- # kill 57965 00:06:25.980 09:43:26 alias_rpc -- common/autotest_common.sh@978 -- # wait 57965 00:06:28.547 00:06:28.547 real 0m4.663s 00:06:28.547 user 0m4.483s 00:06:28.547 sys 0m0.762s 00:06:28.547 09:43:29 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.547 09:43:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.547 ************************************ 00:06:28.547 END TEST alias_rpc 00:06:28.547 ************************************ 00:06:28.547 09:43:29 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:28.547 09:43:29 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:28.547 09:43:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.547 09:43:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.547 09:43:29 -- common/autotest_common.sh@10 -- # set +x 00:06:28.547 ************************************ 00:06:28.547 START TEST spdkcli_tcp 00:06:28.547 ************************************ 00:06:28.547 09:43:29 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:28.805 * Looking for test storage... 
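Before each suite, autotest_common.sh probes the installed lcov version with the `lt 1.15 2` / `cmp_versions` helpers from scripts/common.sh, splitting version strings into fields and comparing them numerically. A simplified sketch of that comparison, under the hypothetical name `version_lt`, handling only dot-separated numeric fields:

```shell
#!/usr/bin/env bash
# version_lt A B: exit 0 when version A sorts strictly before version B.
# Simplified from scripts/common.sh's cmp_versions seen above: fields are
# split on '.' only (the real helper also splits on '-' and ':'), and a
# missing field is treated as 0.
version_lt() {
    local IFS=.
    local -a ver1=($1) ver2=($2)   # IFS=. splits "1.15" into (1 15)
    local v len=${#ver1[@]}
    [ "${#ver2[@]}" -gt "$len" ] && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if [ "$a" -lt "$b" ]; then return 0; fi
        if [ "$a" -gt "$b" ]; then return 1; fi
    done
    return 1    # equal versions are not less-than
}
```

In the log, `lt 1.15 2` succeeds (1 < 2 in the first field), so the script goes on to set the branch/function coverage flags in LCOV_OPTS.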
00:06:28.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:28.805 09:43:29 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:28.805 09:43:29 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:28.805 09:43:29 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:28.805 09:43:29 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.805 09:43:29 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:28.805 09:43:29 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.805 09:43:29 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:28.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.805 --rc genhtml_branch_coverage=1 00:06:28.805 --rc genhtml_function_coverage=1 00:06:28.805 --rc genhtml_legend=1 00:06:28.805 --rc geninfo_all_blocks=1 00:06:28.805 --rc geninfo_unexecuted_blocks=1 00:06:28.805 00:06:28.805 ' 00:06:28.805 09:43:29 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:28.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.805 --rc genhtml_branch_coverage=1 00:06:28.805 --rc genhtml_function_coverage=1 00:06:28.805 --rc genhtml_legend=1 00:06:28.805 --rc geninfo_all_blocks=1 00:06:28.805 --rc geninfo_unexecuted_blocks=1 00:06:28.805 00:06:28.805 ' 00:06:28.805 09:43:29 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:28.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.805 --rc genhtml_branch_coverage=1 00:06:28.805 --rc genhtml_function_coverage=1 00:06:28.805 --rc genhtml_legend=1 00:06:28.805 --rc geninfo_all_blocks=1 00:06:28.805 --rc geninfo_unexecuted_blocks=1 00:06:28.805 00:06:28.805 ' 00:06:28.805 09:43:29 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:28.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.805 --rc genhtml_branch_coverage=1 00:06:28.805 --rc genhtml_function_coverage=1 00:06:28.805 --rc genhtml_legend=1 00:06:28.805 --rc geninfo_all_blocks=1 00:06:28.805 --rc geninfo_unexecuted_blocks=1 00:06:28.805 00:06:28.805 ' 00:06:28.805 09:43:29 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:28.805 09:43:29 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:28.805 09:43:29 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:28.805 09:43:29 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:28.805 09:43:29 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:28.805 09:43:29 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:28.805 09:43:29 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:28.805 09:43:29 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:28.805 09:43:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.805 09:43:29 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58077 00:06:28.805 09:43:29 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58077 00:06:28.805 09:43:29 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:28.805 Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.805 09:43:29 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58077 ']' 00:06:28.805 09:43:29 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.805 09:43:29 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.805 09:43:29 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.805 09:43:29 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.805 09:43:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:29.062 [2024-11-27 09:43:29.973615] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:06:29.062 [2024-11-27 09:43:29.973887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58077 ] 00:06:29.062 [2024-11-27 09:43:30.154985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.320 [2024-11-27 09:43:30.299152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.320 [2024-11-27 09:43:30.299213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.256 09:43:31 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.256 09:43:31 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:30.256 09:43:31 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58100 00:06:30.256 09:43:31 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:30.256 09:43:31 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:30.516 [ 00:06:30.516 "bdev_malloc_delete", 00:06:30.516 
"bdev_malloc_create", 00:06:30.516 "bdev_null_resize", 00:06:30.516 "bdev_null_delete", 00:06:30.516 "bdev_null_create", 00:06:30.516 "bdev_nvme_cuse_unregister", 00:06:30.516 "bdev_nvme_cuse_register", 00:06:30.516 "bdev_opal_new_user", 00:06:30.516 "bdev_opal_set_lock_state", 00:06:30.516 "bdev_opal_delete", 00:06:30.516 "bdev_opal_get_info", 00:06:30.516 "bdev_opal_create", 00:06:30.516 "bdev_nvme_opal_revert", 00:06:30.516 "bdev_nvme_opal_init", 00:06:30.516 "bdev_nvme_send_cmd", 00:06:30.516 "bdev_nvme_set_keys", 00:06:30.516 "bdev_nvme_get_path_iostat", 00:06:30.516 "bdev_nvme_get_mdns_discovery_info", 00:06:30.516 "bdev_nvme_stop_mdns_discovery", 00:06:30.516 "bdev_nvme_start_mdns_discovery", 00:06:30.516 "bdev_nvme_set_multipath_policy", 00:06:30.516 "bdev_nvme_set_preferred_path", 00:06:30.516 "bdev_nvme_get_io_paths", 00:06:30.516 "bdev_nvme_remove_error_injection", 00:06:30.516 "bdev_nvme_add_error_injection", 00:06:30.516 "bdev_nvme_get_discovery_info", 00:06:30.516 "bdev_nvme_stop_discovery", 00:06:30.516 "bdev_nvme_start_discovery", 00:06:30.516 "bdev_nvme_get_controller_health_info", 00:06:30.516 "bdev_nvme_disable_controller", 00:06:30.516 "bdev_nvme_enable_controller", 00:06:30.516 "bdev_nvme_reset_controller", 00:06:30.516 "bdev_nvme_get_transport_statistics", 00:06:30.516 "bdev_nvme_apply_firmware", 00:06:30.516 "bdev_nvme_detach_controller", 00:06:30.516 "bdev_nvme_get_controllers", 00:06:30.516 "bdev_nvme_attach_controller", 00:06:30.516 "bdev_nvme_set_hotplug", 00:06:30.516 "bdev_nvme_set_options", 00:06:30.516 "bdev_passthru_delete", 00:06:30.516 "bdev_passthru_create", 00:06:30.516 "bdev_lvol_set_parent_bdev", 00:06:30.516 "bdev_lvol_set_parent", 00:06:30.516 "bdev_lvol_check_shallow_copy", 00:06:30.516 "bdev_lvol_start_shallow_copy", 00:06:30.516 "bdev_lvol_grow_lvstore", 00:06:30.516 "bdev_lvol_get_lvols", 00:06:30.516 "bdev_lvol_get_lvstores", 00:06:30.516 "bdev_lvol_delete", 00:06:30.516 "bdev_lvol_set_read_only", 00:06:30.516 
"bdev_lvol_resize", 00:06:30.516 "bdev_lvol_decouple_parent", 00:06:30.516 "bdev_lvol_inflate", 00:06:30.516 "bdev_lvol_rename", 00:06:30.516 "bdev_lvol_clone_bdev", 00:06:30.516 "bdev_lvol_clone", 00:06:30.516 "bdev_lvol_snapshot", 00:06:30.516 "bdev_lvol_create", 00:06:30.516 "bdev_lvol_delete_lvstore", 00:06:30.516 "bdev_lvol_rename_lvstore", 00:06:30.516 "bdev_lvol_create_lvstore", 00:06:30.516 "bdev_raid_set_options", 00:06:30.516 "bdev_raid_remove_base_bdev", 00:06:30.516 "bdev_raid_add_base_bdev", 00:06:30.516 "bdev_raid_delete", 00:06:30.516 "bdev_raid_create", 00:06:30.516 "bdev_raid_get_bdevs", 00:06:30.516 "bdev_error_inject_error", 00:06:30.516 "bdev_error_delete", 00:06:30.516 "bdev_error_create", 00:06:30.516 "bdev_split_delete", 00:06:30.516 "bdev_split_create", 00:06:30.516 "bdev_delay_delete", 00:06:30.516 "bdev_delay_create", 00:06:30.516 "bdev_delay_update_latency", 00:06:30.516 "bdev_zone_block_delete", 00:06:30.516 "bdev_zone_block_create", 00:06:30.516 "blobfs_create", 00:06:30.516 "blobfs_detect", 00:06:30.516 "blobfs_set_cache_size", 00:06:30.516 "bdev_aio_delete", 00:06:30.516 "bdev_aio_rescan", 00:06:30.516 "bdev_aio_create", 00:06:30.516 "bdev_ftl_set_property", 00:06:30.516 "bdev_ftl_get_properties", 00:06:30.516 "bdev_ftl_get_stats", 00:06:30.516 "bdev_ftl_unmap", 00:06:30.516 "bdev_ftl_unload", 00:06:30.516 "bdev_ftl_delete", 00:06:30.516 "bdev_ftl_load", 00:06:30.516 "bdev_ftl_create", 00:06:30.516 "bdev_virtio_attach_controller", 00:06:30.516 "bdev_virtio_scsi_get_devices", 00:06:30.516 "bdev_virtio_detach_controller", 00:06:30.516 "bdev_virtio_blk_set_hotplug", 00:06:30.516 "bdev_iscsi_delete", 00:06:30.516 "bdev_iscsi_create", 00:06:30.516 "bdev_iscsi_set_options", 00:06:30.516 "accel_error_inject_error", 00:06:30.516 "ioat_scan_accel_module", 00:06:30.516 "dsa_scan_accel_module", 00:06:30.516 "iaa_scan_accel_module", 00:06:30.516 "keyring_file_remove_key", 00:06:30.516 "keyring_file_add_key", 00:06:30.516 
"keyring_linux_set_options", 00:06:30.516 "fsdev_aio_delete", 00:06:30.516 "fsdev_aio_create", 00:06:30.516 "iscsi_get_histogram", 00:06:30.516 "iscsi_enable_histogram", 00:06:30.516 "iscsi_set_options", 00:06:30.516 "iscsi_get_auth_groups", 00:06:30.516 "iscsi_auth_group_remove_secret", 00:06:30.516 "iscsi_auth_group_add_secret", 00:06:30.516 "iscsi_delete_auth_group", 00:06:30.516 "iscsi_create_auth_group", 00:06:30.516 "iscsi_set_discovery_auth", 00:06:30.516 "iscsi_get_options", 00:06:30.516 "iscsi_target_node_request_logout", 00:06:30.516 "iscsi_target_node_set_redirect", 00:06:30.516 "iscsi_target_node_set_auth", 00:06:30.516 "iscsi_target_node_add_lun", 00:06:30.516 "iscsi_get_stats", 00:06:30.516 "iscsi_get_connections", 00:06:30.517 "iscsi_portal_group_set_auth", 00:06:30.517 "iscsi_start_portal_group", 00:06:30.517 "iscsi_delete_portal_group", 00:06:30.517 "iscsi_create_portal_group", 00:06:30.517 "iscsi_get_portal_groups", 00:06:30.517 "iscsi_delete_target_node", 00:06:30.517 "iscsi_target_node_remove_pg_ig_maps", 00:06:30.517 "iscsi_target_node_add_pg_ig_maps", 00:06:30.517 "iscsi_create_target_node", 00:06:30.517 "iscsi_get_target_nodes", 00:06:30.517 "iscsi_delete_initiator_group", 00:06:30.517 "iscsi_initiator_group_remove_initiators", 00:06:30.517 "iscsi_initiator_group_add_initiators", 00:06:30.517 "iscsi_create_initiator_group", 00:06:30.517 "iscsi_get_initiator_groups", 00:06:30.517 "nvmf_set_crdt", 00:06:30.517 "nvmf_set_config", 00:06:30.517 "nvmf_set_max_subsystems", 00:06:30.517 "nvmf_stop_mdns_prr", 00:06:30.517 "nvmf_publish_mdns_prr", 00:06:30.517 "nvmf_subsystem_get_listeners", 00:06:30.517 "nvmf_subsystem_get_qpairs", 00:06:30.517 "nvmf_subsystem_get_controllers", 00:06:30.517 "nvmf_get_stats", 00:06:30.517 "nvmf_get_transports", 00:06:30.517 "nvmf_create_transport", 00:06:30.517 "nvmf_get_targets", 00:06:30.517 "nvmf_delete_target", 00:06:30.517 "nvmf_create_target", 00:06:30.517 "nvmf_subsystem_allow_any_host", 00:06:30.517 
"nvmf_subsystem_set_keys", 00:06:30.517 "nvmf_subsystem_remove_host", 00:06:30.517 "nvmf_subsystem_add_host", 00:06:30.517 "nvmf_ns_remove_host", 00:06:30.517 "nvmf_ns_add_host", 00:06:30.517 "nvmf_subsystem_remove_ns", 00:06:30.517 "nvmf_subsystem_set_ns_ana_group", 00:06:30.517 "nvmf_subsystem_add_ns", 00:06:30.517 "nvmf_subsystem_listener_set_ana_state", 00:06:30.517 "nvmf_discovery_get_referrals", 00:06:30.517 "nvmf_discovery_remove_referral", 00:06:30.517 "nvmf_discovery_add_referral", 00:06:30.517 "nvmf_subsystem_remove_listener", 00:06:30.517 "nvmf_subsystem_add_listener", 00:06:30.517 "nvmf_delete_subsystem", 00:06:30.517 "nvmf_create_subsystem", 00:06:30.517 "nvmf_get_subsystems", 00:06:30.517 "env_dpdk_get_mem_stats", 00:06:30.517 "nbd_get_disks", 00:06:30.517 "nbd_stop_disk", 00:06:30.517 "nbd_start_disk", 00:06:30.517 "ublk_recover_disk", 00:06:30.517 "ublk_get_disks", 00:06:30.517 "ublk_stop_disk", 00:06:30.517 "ublk_start_disk", 00:06:30.517 "ublk_destroy_target", 00:06:30.517 "ublk_create_target", 00:06:30.517 "virtio_blk_create_transport", 00:06:30.517 "virtio_blk_get_transports", 00:06:30.517 "vhost_controller_set_coalescing", 00:06:30.517 "vhost_get_controllers", 00:06:30.517 "vhost_delete_controller", 00:06:30.517 "vhost_create_blk_controller", 00:06:30.517 "vhost_scsi_controller_remove_target", 00:06:30.517 "vhost_scsi_controller_add_target", 00:06:30.517 "vhost_start_scsi_controller", 00:06:30.517 "vhost_create_scsi_controller", 00:06:30.517 "thread_set_cpumask", 00:06:30.517 "scheduler_set_options", 00:06:30.517 "framework_get_governor", 00:06:30.517 "framework_get_scheduler", 00:06:30.517 "framework_set_scheduler", 00:06:30.517 "framework_get_reactors", 00:06:30.517 "thread_get_io_channels", 00:06:30.517 "thread_get_pollers", 00:06:30.517 "thread_get_stats", 00:06:30.517 "framework_monitor_context_switch", 00:06:30.517 "spdk_kill_instance", 00:06:30.517 "log_enable_timestamps", 00:06:30.517 "log_get_flags", 00:06:30.517 "log_clear_flag", 
00:06:30.517 "log_set_flag", 00:06:30.517 "log_get_level", 00:06:30.517 "log_set_level", 00:06:30.517 "log_get_print_level", 00:06:30.517 "log_set_print_level", 00:06:30.517 "framework_enable_cpumask_locks", 00:06:30.517 "framework_disable_cpumask_locks", 00:06:30.517 "framework_wait_init", 00:06:30.517 "framework_start_init", 00:06:30.517 "scsi_get_devices", 00:06:30.517 "bdev_get_histogram", 00:06:30.517 "bdev_enable_histogram", 00:06:30.517 "bdev_set_qos_limit", 00:06:30.517 "bdev_set_qd_sampling_period", 00:06:30.517 "bdev_get_bdevs", 00:06:30.517 "bdev_reset_iostat", 00:06:30.517 "bdev_get_iostat", 00:06:30.517 "bdev_examine", 00:06:30.517 "bdev_wait_for_examine", 00:06:30.517 "bdev_set_options", 00:06:30.517 "accel_get_stats", 00:06:30.517 "accel_set_options", 00:06:30.517 "accel_set_driver", 00:06:30.517 "accel_crypto_key_destroy", 00:06:30.517 "accel_crypto_keys_get", 00:06:30.517 "accel_crypto_key_create", 00:06:30.517 "accel_assign_opc", 00:06:30.517 "accel_get_module_info", 00:06:30.517 "accel_get_opc_assignments", 00:06:30.517 "vmd_rescan", 00:06:30.517 "vmd_remove_device", 00:06:30.517 "vmd_enable", 00:06:30.517 "sock_get_default_impl", 00:06:30.517 "sock_set_default_impl", 00:06:30.517 "sock_impl_set_options", 00:06:30.517 "sock_impl_get_options", 00:06:30.517 "iobuf_get_stats", 00:06:30.517 "iobuf_set_options", 00:06:30.517 "keyring_get_keys", 00:06:30.517 "framework_get_pci_devices", 00:06:30.517 "framework_get_config", 00:06:30.517 "framework_get_subsystems", 00:06:30.517 "fsdev_set_opts", 00:06:30.517 "fsdev_get_opts", 00:06:30.517 "trace_get_info", 00:06:30.517 "trace_get_tpoint_group_mask", 00:06:30.517 "trace_disable_tpoint_group", 00:06:30.517 "trace_enable_tpoint_group", 00:06:30.517 "trace_clear_tpoint_mask", 00:06:30.517 "trace_set_tpoint_mask", 00:06:30.517 "notify_get_notifications", 00:06:30.517 "notify_get_types", 00:06:30.517 "spdk_get_version", 00:06:30.517 "rpc_get_methods" 00:06:30.517 ] 00:06:30.517 09:43:31 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:30.517 09:43:31 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:30.517 09:43:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.517 09:43:31 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:30.517 09:43:31 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58077 00:06:30.517 09:43:31 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58077 ']' 00:06:30.517 09:43:31 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58077 00:06:30.517 09:43:31 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:30.517 09:43:31 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.517 09:43:31 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58077 00:06:30.777 killing process with pid 58077 00:06:30.777 09:43:31 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.777 09:43:31 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.777 09:43:31 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58077' 00:06:30.777 09:43:31 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58077 00:06:30.777 09:43:31 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58077 00:06:33.323 ************************************ 00:06:33.323 END TEST spdkcli_tcp 00:06:33.323 ************************************ 00:06:33.323 00:06:33.323 real 0m4.725s 00:06:33.323 user 0m8.295s 00:06:33.323 sys 0m0.840s 00:06:33.323 09:43:34 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.323 09:43:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:33.323 09:43:34 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:33.323 09:43:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.323 09:43:34 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.323 09:43:34 -- common/autotest_common.sh@10 -- # set +x 00:06:33.323 ************************************ 00:06:33.323 START TEST dpdk_mem_utility 00:06:33.323 ************************************ 00:06:33.323 09:43:34 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:33.582 * Looking for test storage... 00:06:33.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:33.582 09:43:34 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.582 09:43:34 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.582 09:43:34 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.582 09:43:34 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.582 09:43:34 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.582 09:43:34 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.582 09:43:34 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.582 09:43:34 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.582 09:43:34 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.582 09:43:34 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.582 09:43:34 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.583 09:43:34 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.583 09:43:34 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.583 09:43:34 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.583 09:43:34 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.583 09:43:34 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:33.583 09:43:34 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:33.583 
09:43:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.583 09:43:34 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.583 09:43:34 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:33.583 09:43:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:33.583 09:43:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.583 09:43:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:33.583 09:43:34 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.583 09:43:34 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:33.583 09:43:34 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:33.583 09:43:34 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.583 09:43:34 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:33.583 09:43:34 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.583 09:43:34 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.583 09:43:34 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.583 09:43:34 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:33.583 09:43:34 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.583 09:43:34 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.583 --rc genhtml_branch_coverage=1 00:06:33.583 --rc genhtml_function_coverage=1 00:06:33.583 --rc genhtml_legend=1 00:06:33.583 --rc geninfo_all_blocks=1 00:06:33.583 --rc geninfo_unexecuted_blocks=1 00:06:33.583 00:06:33.583 ' 00:06:33.583 09:43:34 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.583 --rc 
genhtml_branch_coverage=1 00:06:33.583 --rc genhtml_function_coverage=1 00:06:33.583 --rc genhtml_legend=1 00:06:33.583 --rc geninfo_all_blocks=1 00:06:33.583 --rc geninfo_unexecuted_blocks=1 00:06:33.583 00:06:33.583 ' 00:06:33.583 09:43:34 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:33.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.583 --rc genhtml_branch_coverage=1 00:06:33.583 --rc genhtml_function_coverage=1 00:06:33.583 --rc genhtml_legend=1 00:06:33.583 --rc geninfo_all_blocks=1 00:06:33.583 --rc geninfo_unexecuted_blocks=1 00:06:33.583 00:06:33.583 ' 00:06:33.583 09:43:34 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.583 --rc genhtml_branch_coverage=1 00:06:33.583 --rc genhtml_function_coverage=1 00:06:33.583 --rc genhtml_legend=1 00:06:33.583 --rc geninfo_all_blocks=1 00:06:33.583 --rc geninfo_unexecuted_blocks=1 00:06:33.583 00:06:33.583 ' 00:06:33.583 09:43:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:33.583 09:43:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58205 00:06:33.583 09:43:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.583 09:43:34 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58205 00:06:33.583 09:43:34 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58205 ']' 00:06:33.583 09:43:34 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.583 09:43:34 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.583 09:43:34 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:33.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.583 09:43:34 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.583 09:43:34 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:33.842 [2024-11-27 09:43:34.754824] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:06:33.842 [2024-11-27 09:43:34.754961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58205 ] 00:06:33.842 [2024-11-27 09:43:34.931640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.102 [2024-11-27 09:43:35.072463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.039 09:43:36 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.039 09:43:36 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:35.039 09:43:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:35.039 09:43:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:35.039 09:43:36 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.039 09:43:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:35.039 { 00:06:35.039 "filename": "/tmp/spdk_mem_dump.txt" 00:06:35.039 } 00:06:35.039 09:43:36 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.039 09:43:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:35.039 DPDK memory size 824.000000 MiB in 1 heap(s) 00:06:35.039 1 heaps totaling size 824.000000 MiB 00:06:35.039 size: 
824.000000 MiB heap id: 0 00:06:35.039 end heaps---------- 00:06:35.039 9 mempools totaling size 603.782043 MiB 00:06:35.039 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:35.039 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:35.039 size: 100.555481 MiB name: bdev_io_58205 00:06:35.039 size: 50.003479 MiB name: msgpool_58205 00:06:35.039 size: 36.509338 MiB name: fsdev_io_58205 00:06:35.039 size: 21.763794 MiB name: PDU_Pool 00:06:35.039 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:35.039 size: 4.133484 MiB name: evtpool_58205 00:06:35.039 size: 0.026123 MiB name: Session_Pool 00:06:35.039 end mempools------- 00:06:35.039 6 memzones totaling size 4.142822 MiB 00:06:35.039 size: 1.000366 MiB name: RG_ring_0_58205 00:06:35.039 size: 1.000366 MiB name: RG_ring_1_58205 00:06:35.039 size: 1.000366 MiB name: RG_ring_4_58205 00:06:35.039 size: 1.000366 MiB name: RG_ring_5_58205 00:06:35.039 size: 0.125366 MiB name: RG_ring_2_58205 00:06:35.039 size: 0.015991 MiB name: RG_ring_3_58205 00:06:35.039 end memzones------- 00:06:35.039 09:43:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:35.301 heap id: 0 total size: 824.000000 MiB number of busy elements: 326 number of free elements: 18 00:06:35.301 list of free elements. 
size: 16.778687 MiB 00:06:35.301 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:35.301 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:35.301 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:35.301 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:35.301 element at address: 0x200019900040 with size: 0.999939 MiB 00:06:35.301 element at address: 0x200019a00000 with size: 0.999084 MiB 00:06:35.301 element at address: 0x200032600000 with size: 0.994324 MiB 00:06:35.301 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:35.301 element at address: 0x200019200000 with size: 0.959656 MiB 00:06:35.301 element at address: 0x200019d00040 with size: 0.936401 MiB 00:06:35.301 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:35.301 element at address: 0x20001b400000 with size: 0.559998 MiB 00:06:35.301 element at address: 0x200000c00000 with size: 0.489197 MiB 00:06:35.301 element at address: 0x200019600000 with size: 0.487976 MiB 00:06:35.301 element at address: 0x200019e00000 with size: 0.485413 MiB 00:06:35.301 element at address: 0x200012c00000 with size: 0.433472 MiB 00:06:35.301 element at address: 0x200028800000 with size: 0.390442 MiB 00:06:35.301 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:35.301 list of standard malloc elements. 
size: 199.290405 MiB 00:06:35.301 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:35.301 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:35.301 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:35.301 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:35.301 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:06:35.301 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:35.301 element at address: 0x200019deff40 with size: 0.062683 MiB 00:06:35.301 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:35.301 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:06:35.301 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:06:35.301 element at address: 0x200012bff040 with size: 0.000305 MiB 00:06:35.301 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:06:35.301 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:06:35.301 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:06:35.302 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:06:35.302 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:06:35.302 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:06:35.302 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:06:35.302 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:06:35.302 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:06:35.302 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200000cff000 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012bff180 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012bff280 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012bff380 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012bff480 with size: 0.000244 MiB 00:06:35.302 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012bff680 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012bff780 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012bff880 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012bff980 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:06:35.302 
element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200019affc40 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b48f5c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b4906c0 with size: 0.000244 
MiB 00:06:35.302 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:06:35.302 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4922c0 
with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:06:35.303 element at 
address: 0x20001b493ec0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:06:35.303 element at address: 0x200028863f40 with size: 0.000244 MiB 00:06:35.303 element at address: 0x200028864040 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886af80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886b080 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886b180 with size: 0.000244 MiB 
00:06:35.303 element at address: 0x20002886b280 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886b380 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886b480 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886b580 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886b680 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886b780 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886b880 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886b980 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886be80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886c080 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886c180 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886c280 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886c380 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886c480 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886c580 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886c680 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886c780 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886c880 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886c980 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886cd80 with 
size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886d080 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886d180 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886d280 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886d380 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886d480 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886d580 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886d680 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886d780 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886d880 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886d980 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886da80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886db80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886de80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886df80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886e080 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886e180 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886e280 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886e380 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886e480 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886e580 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886e680 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886e780 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886e880 with size: 0.000244 MiB 00:06:35.303 element at address: 
0x20002886e980 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886f080 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886f180 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886f280 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886f380 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886f480 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886f580 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886f680 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886f780 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886f880 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886f980 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:06:35.303 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:06:35.303 list of memzone associated elements. 
size: 607.930908 MiB 00:06:35.303 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:06:35.303 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:35.303 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:06:35.303 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:35.303 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:06:35.303 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58205_0 00:06:35.303 element at address: 0x200000dff340 with size: 48.003113 MiB 00:06:35.303 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58205_0 00:06:35.303 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:06:35.303 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58205_0 00:06:35.303 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:06:35.303 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:35.303 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:06:35.303 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:35.304 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:06:35.304 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58205_0 00:06:35.304 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:06:35.304 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58205 00:06:35.304 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:35.304 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58205 00:06:35.304 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:06:35.304 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:35.304 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:06:35.304 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:35.304 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:35.304 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:35.304 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:06:35.304 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:35.304 element at address: 0x200000cff100 with size: 1.000549 MiB 00:06:35.304 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58205 00:06:35.304 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:06:35.304 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58205 00:06:35.304 element at address: 0x200019affd40 with size: 1.000549 MiB 00:06:35.304 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58205 00:06:35.304 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:06:35.304 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58205 00:06:35.304 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:06:35.304 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58205 00:06:35.304 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:06:35.304 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58205 00:06:35.304 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:06:35.304 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:35.304 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:06:35.304 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:35.304 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:06:35.304 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:35.304 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:06:35.304 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58205 00:06:35.304 element at address: 0x20000085df80 with size: 0.125549 MiB 00:06:35.304 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58205 00:06:35.304 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:06:35.304 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:35.304 element at address: 0x200028864140 with size: 0.023804 MiB 00:06:35.304 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:35.304 element at address: 0x200000859d40 with size: 0.016174 MiB 00:06:35.304 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58205 00:06:35.304 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:06:35.304 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:35.304 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:06:35.304 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58205 00:06:35.304 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:06:35.304 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58205 00:06:35.304 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:06:35.304 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58205 00:06:35.304 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:06:35.304 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:35.304 09:43:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:35.304 09:43:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58205 00:06:35.304 09:43:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58205 ']' 00:06:35.304 09:43:36 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58205 00:06:35.304 09:43:36 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:35.304 09:43:36 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.304 09:43:36 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58205 00:06:35.304 09:43:36 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.304 09:43:36 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.304 09:43:36 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58205' 00:06:35.304 killing process with pid 58205 00:06:35.304 09:43:36 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58205 00:06:35.304 09:43:36 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58205 00:06:37.844 00:06:37.844 real 0m4.514s 00:06:37.844 user 0m4.261s 00:06:37.844 sys 0m0.729s 00:06:37.844 ************************************ 00:06:37.844 END TEST dpdk_mem_utility 00:06:37.844 ************************************ 00:06:37.844 09:43:38 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.844 09:43:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:38.103 09:43:38 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:38.103 09:43:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.103 09:43:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.103 09:43:38 -- common/autotest_common.sh@10 -- # set +x 00:06:38.103 ************************************ 00:06:38.103 START TEST event 00:06:38.103 ************************************ 00:06:38.104 09:43:38 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:38.104 * Looking for test storage... 
00:06:38.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:38.104 09:43:39 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:38.104 09:43:39 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:38.104 09:43:39 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:38.104 09:43:39 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:38.104 09:43:39 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.104 09:43:39 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.104 09:43:39 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.104 09:43:39 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.104 09:43:39 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.104 09:43:39 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.104 09:43:39 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.104 09:43:39 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.104 09:43:39 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.104 09:43:39 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.104 09:43:39 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.104 09:43:39 event -- scripts/common.sh@344 -- # case "$op" in 00:06:38.104 09:43:39 event -- scripts/common.sh@345 -- # : 1 00:06:38.104 09:43:39 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.104 09:43:39 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.104 09:43:39 event -- scripts/common.sh@365 -- # decimal 1 00:06:38.104 09:43:39 event -- scripts/common.sh@353 -- # local d=1 00:06:38.104 09:43:39 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.104 09:43:39 event -- scripts/common.sh@355 -- # echo 1 00:06:38.104 09:43:39 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.104 09:43:39 event -- scripts/common.sh@366 -- # decimal 2 00:06:38.104 09:43:39 event -- scripts/common.sh@353 -- # local d=2 00:06:38.104 09:43:39 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.104 09:43:39 event -- scripts/common.sh@355 -- # echo 2 00:06:38.104 09:43:39 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.104 09:43:39 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.104 09:43:39 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.104 09:43:39 event -- scripts/common.sh@368 -- # return 0 00:06:38.104 09:43:39 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.104 09:43:39 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:38.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.104 --rc genhtml_branch_coverage=1 00:06:38.104 --rc genhtml_function_coverage=1 00:06:38.104 --rc genhtml_legend=1 00:06:38.104 --rc geninfo_all_blocks=1 00:06:38.104 --rc geninfo_unexecuted_blocks=1 00:06:38.104 00:06:38.104 ' 00:06:38.104 09:43:39 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:38.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.104 --rc genhtml_branch_coverage=1 00:06:38.104 --rc genhtml_function_coverage=1 00:06:38.104 --rc genhtml_legend=1 00:06:38.104 --rc geninfo_all_blocks=1 00:06:38.104 --rc geninfo_unexecuted_blocks=1 00:06:38.104 00:06:38.104 ' 00:06:38.104 09:43:39 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:38.104 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:38.104 --rc genhtml_branch_coverage=1 00:06:38.104 --rc genhtml_function_coverage=1 00:06:38.104 --rc genhtml_legend=1 00:06:38.104 --rc geninfo_all_blocks=1 00:06:38.104 --rc geninfo_unexecuted_blocks=1 00:06:38.104 00:06:38.104 ' 00:06:38.104 09:43:39 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:38.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.104 --rc genhtml_branch_coverage=1 00:06:38.104 --rc genhtml_function_coverage=1 00:06:38.104 --rc genhtml_legend=1 00:06:38.104 --rc geninfo_all_blocks=1 00:06:38.104 --rc geninfo_unexecuted_blocks=1 00:06:38.104 00:06:38.104 ' 00:06:38.104 09:43:39 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:38.104 09:43:39 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:38.104 09:43:39 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:38.104 09:43:39 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:38.104 09:43:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.104 09:43:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.363 ************************************ 00:06:38.363 START TEST event_perf 00:06:38.363 ************************************ 00:06:38.363 09:43:39 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:38.363 Running I/O for 1 seconds...[2024-11-27 09:43:39.289039] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:06:38.363 [2024-11-27 09:43:39.289236] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58318 ] 00:06:38.363 [2024-11-27 09:43:39.459985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:38.622 [2024-11-27 09:43:39.609514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.622 [2024-11-27 09:43:39.609758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.622 [2024-11-27 09:43:39.609802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.622 Running I/O for 1 seconds...[2024-11-27 09:43:39.609698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.001 00:06:40.001 lcore 0: 201054 00:06:40.001 lcore 1: 201055 00:06:40.001 lcore 2: 201055 00:06:40.001 lcore 3: 201058 00:06:40.001 done. 
00:06:40.001 00:06:40.001 ************************************ 00:06:40.001 END TEST event_perf 00:06:40.001 ************************************ 00:06:40.001 real 0m1.635s 00:06:40.001 user 0m4.375s 00:06:40.001 sys 0m0.137s 00:06:40.001 09:43:40 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.001 09:43:40 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:40.001 09:43:40 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:40.001 09:43:40 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:40.001 09:43:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.001 09:43:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:40.001 ************************************ 00:06:40.001 START TEST event_reactor 00:06:40.001 ************************************ 00:06:40.001 09:43:40 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:40.001 [2024-11-27 09:43:40.998440] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:06:40.001 [2024-11-27 09:43:40.998570] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58358 ] 00:06:40.261 [2024-11-27 09:43:41.178508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.261 [2024-11-27 09:43:41.322110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.640 test_start 00:06:41.640 oneshot 00:06:41.640 tick 100 00:06:41.640 tick 100 00:06:41.640 tick 250 00:06:41.640 tick 100 00:06:41.640 tick 100 00:06:41.640 tick 100 00:06:41.640 tick 250 00:06:41.640 tick 500 00:06:41.640 tick 100 00:06:41.640 tick 100 00:06:41.640 tick 250 00:06:41.640 tick 100 00:06:41.640 tick 100 00:06:41.640 test_end 00:06:41.640 00:06:41.640 real 0m1.633s 00:06:41.640 user 0m1.404s 00:06:41.640 sys 0m0.121s 00:06:41.640 ************************************ 00:06:41.640 END TEST event_reactor 00:06:41.640 ************************************ 00:06:41.640 09:43:42 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.640 09:43:42 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:41.640 09:43:42 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:41.640 09:43:42 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:41.640 09:43:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.640 09:43:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.640 ************************************ 00:06:41.640 START TEST event_reactor_perf 00:06:41.640 ************************************ 00:06:41.640 09:43:42 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:41.640 [2024-11-27 
09:43:42.701417] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:06:41.640 [2024-11-27 09:43:42.701673] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58400 ] 00:06:41.901 [2024-11-27 09:43:42.883785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.159 [2024-11-27 09:43:43.034225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.535 test_start 00:06:43.535 test_end 00:06:43.535 Performance: 368302 events per second 00:06:43.535 00:06:43.535 real 0m1.621s 00:06:43.535 user 0m1.408s 00:06:43.535 sys 0m0.104s 00:06:43.535 ************************************ 00:06:43.535 END TEST event_reactor_perf 00:06:43.535 ************************************ 00:06:43.536 09:43:44 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.536 09:43:44 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.536 09:43:44 event -- event/event.sh@49 -- # uname -s 00:06:43.536 09:43:44 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:43.536 09:43:44 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:43.536 09:43:44 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.536 09:43:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.536 09:43:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.536 ************************************ 00:06:43.536 START TEST event_scheduler 00:06:43.536 ************************************ 00:06:43.536 09:43:44 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:43.536 * Looking for test storage... 
00:06:43.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:43.536 09:43:44 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:43.536 09:43:44 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:43.536 09:43:44 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:43.536 09:43:44 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.536 09:43:44 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:43.536 09:43:44 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.536 09:43:44 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:43.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.536 --rc genhtml_branch_coverage=1 00:06:43.536 --rc genhtml_function_coverage=1 00:06:43.536 --rc genhtml_legend=1 00:06:43.536 --rc geninfo_all_blocks=1 00:06:43.536 --rc geninfo_unexecuted_blocks=1 00:06:43.536 00:06:43.536 ' 00:06:43.536 09:43:44 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:43.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.536 --rc genhtml_branch_coverage=1 00:06:43.536 --rc genhtml_function_coverage=1 00:06:43.536 --rc 
genhtml_legend=1 00:06:43.536 --rc geninfo_all_blocks=1 00:06:43.536 --rc geninfo_unexecuted_blocks=1 00:06:43.536 00:06:43.536 ' 00:06:43.536 09:43:44 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:43.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.536 --rc genhtml_branch_coverage=1 00:06:43.536 --rc genhtml_function_coverage=1 00:06:43.536 --rc genhtml_legend=1 00:06:43.536 --rc geninfo_all_blocks=1 00:06:43.536 --rc geninfo_unexecuted_blocks=1 00:06:43.536 00:06:43.536 ' 00:06:43.536 09:43:44 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:43.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.536 --rc genhtml_branch_coverage=1 00:06:43.536 --rc genhtml_function_coverage=1 00:06:43.536 --rc genhtml_legend=1 00:06:43.536 --rc geninfo_all_blocks=1 00:06:43.536 --rc geninfo_unexecuted_blocks=1 00:06:43.536 00:06:43.536 ' 00:06:43.536 09:43:44 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:43.536 09:43:44 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58476 00:06:43.536 09:43:44 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:43.536 09:43:44 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:43.536 09:43:44 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58476 00:06:43.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:43.536 09:43:44 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58476 ']' 00:06:43.536 09:43:44 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.536 09:43:44 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.536 09:43:44 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.536 09:43:44 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.536 09:43:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:43.796 [2024-11-27 09:43:44.677733] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:06:43.796 [2024-11-27 09:43:44.677886] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58476 ] 00:06:43.796 [2024-11-27 09:43:44.857979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:44.058 [2024-11-27 09:43:45.010299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.058 [2024-11-27 09:43:45.010501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.058 [2024-11-27 09:43:45.010663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.058 [2024-11-27 09:43:45.010866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.627 09:43:45 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.627 09:43:45 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:44.627 09:43:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:44.627 09:43:45 
event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.627 09:43:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:44.627 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:44.627 POWER: Cannot set governor of lcore 0 to userspace 00:06:44.627 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:44.627 POWER: Cannot set governor of lcore 0 to performance 00:06:44.627 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:44.627 POWER: Cannot set governor of lcore 0 to userspace 00:06:44.627 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:44.627 POWER: Cannot set governor of lcore 0 to userspace 00:06:44.627 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:44.627 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:44.627 POWER: Unable to set Power Management Environment for lcore 0 00:06:44.627 [2024-11-27 09:43:45.709235] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:44.627 [2024-11-27 09:43:45.709313] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:44.627 [2024-11-27 09:43:45.709387] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:44.627 [2024-11-27 09:43:45.709503] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:44.627 [2024-11-27 09:43:45.709588] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:44.627 [2024-11-27 09:43:45.709681] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:44.627 09:43:45 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.627 09:43:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd 
framework_start_init 00:06:44.627 09:43:45 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.627 09:43:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.193 [2024-11-27 09:43:46.137678] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:45.193 09:43:46 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.193 09:43:46 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:45.193 09:43:46 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.193 09:43:46 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.194 09:43:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:45.194 ************************************ 00:06:45.194 START TEST scheduler_create_thread 00:06:45.194 ************************************ 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.194 2 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.194 
09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.194 3 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.194 4 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.194 5 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.194 6 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 
-- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.194 7 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.194 8 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.194 9 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.194 10 
00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.194 09:43:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.576 09:43:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.576 09:43:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:46.576 09:43:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:46.576 09:43:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.576 09:43:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.516 09:43:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.516 09:43:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:47.516 09:43:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.516 09:43:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.455 09:43:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.455 09:43:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:48.455 09:43:49 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:48.455 09:43:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.455 09:43:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.024 09:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.024 ************************************ 00:06:49.024 END TEST scheduler_create_thread 00:06:49.024 ************************************ 00:06:49.024 00:06:49.024 real 0m3.885s 00:06:49.024 user 0m0.038s 00:06:49.024 sys 0m0.005s 00:06:49.024 09:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.024 09:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.024 09:43:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:49.024 09:43:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58476 00:06:49.024 09:43:50 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58476 ']' 00:06:49.024 09:43:50 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58476 00:06:49.024 09:43:50 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:49.024 09:43:50 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.024 09:43:50 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58476 00:06:49.024 killing process with pid 58476 00:06:49.024 09:43:50 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:49.024 09:43:50 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:49.024 09:43:50 event.event_scheduler -- common/autotest_common.sh@972 
-- # echo 'killing process with pid 58476' 00:06:49.024 09:43:50 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58476 00:06:49.024 09:43:50 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58476 00:06:49.593 [2024-11-27 09:43:50.417394] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:51.011 00:06:51.011 real 0m7.352s 00:06:51.011 user 0m15.433s 00:06:51.011 sys 0m0.659s 00:06:51.011 ************************************ 00:06:51.011 END TEST event_scheduler 00:06:51.011 ************************************ 00:06:51.011 09:43:51 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.011 09:43:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:51.011 09:43:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:51.011 09:43:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:51.011 09:43:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.011 09:43:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.011 09:43:51 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.011 ************************************ 00:06:51.011 START TEST app_repeat 00:06:51.011 ************************************ 00:06:51.011 09:43:51 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:51.011 09:43:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.011 09:43:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.011 09:43:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:51.011 09:43:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.011 09:43:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:51.011 09:43:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:51.011 09:43:51 event.app_repeat -- 
event/event.sh@17 -- # modprobe nbd 00:06:51.011 09:43:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58599 00:06:51.011 09:43:51 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:51.011 09:43:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:51.011 Process app_repeat pid: 58599 00:06:51.011 spdk_app_start Round 0 00:06:51.011 09:43:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58599' 00:06:51.011 09:43:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:51.011 09:43:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:51.011 09:43:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58599 /var/tmp/spdk-nbd.sock 00:06:51.011 09:43:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58599 ']' 00:06:51.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:51.011 09:43:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.011 09:43:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.011 09:43:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:51.011 09:43:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.011 09:43:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.011 [2024-11-27 09:43:51.835463] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:06:51.011 [2024-11-27 09:43:51.835576] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58599 ] 00:06:51.011 [2024-11-27 09:43:52.013625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.271 [2024-11-27 09:43:52.150263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.271 [2024-11-27 09:43:52.150308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.839 09:43:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.839 09:43:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:51.839 09:43:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.839 Malloc0 00:06:52.099 09:43:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:52.358 Malloc1 00:06:52.358 09:43:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.358 09:43:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.358 09:43:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.358 09:43:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:52.358 09:43:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.358 09:43:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:52.358 09:43:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:52.358 09:43:53 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.358 09:43:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.358 09:43:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:52.358 09:43:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.358 09:43:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:52.358 09:43:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:52.358 09:43:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:52.358 09:43:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.358 09:43:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:52.358 /dev/nbd0 00:06:52.358 09:43:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:52.358 09:43:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:52.358 09:43:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:52.358 09:43:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:52.358 09:43:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.358 09:43:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.358 09:43:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:52.358 09:43:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:52.359 09:43:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.359 09:43:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.359 09:43:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.359 1+0 records in 00:06:52.359 1+0 
records out 00:06:52.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365309 s, 11.2 MB/s 00:06:52.359 09:43:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.618 09:43:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:52.618 09:43:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.618 09:43:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.618 09:43:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:52.618 09:43:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.618 09:43:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.618 09:43:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:52.618 /dev/nbd1 00:06:52.618 09:43:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:52.618 09:43:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:52.618 09:43:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:52.618 09:43:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:52.618 09:43:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:52.618 09:43:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:52.618 09:43:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:52.618 09:43:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:52.618 09:43:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:52.618 09:43:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:52.618 09:43:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:52.618 1+0 records in 00:06:52.618 1+0 records out 00:06:52.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397888 s, 10.3 MB/s 00:06:52.618 09:43:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.618 09:43:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:52.618 09:43:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:52.878 09:43:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:52.878 09:43:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:52.878 09:43:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:52.878 09:43:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:52.878 09:43:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.878 09:43:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.878 09:43:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.878 09:43:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:52.878 { 00:06:52.878 "nbd_device": "/dev/nbd0", 00:06:52.878 "bdev_name": "Malloc0" 00:06:52.878 }, 00:06:52.878 { 00:06:52.878 "nbd_device": "/dev/nbd1", 00:06:52.878 "bdev_name": "Malloc1" 00:06:52.878 } 00:06:52.878 ]' 00:06:52.878 09:43:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.878 09:43:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:52.878 { 00:06:52.878 "nbd_device": "/dev/nbd0", 00:06:52.878 "bdev_name": "Malloc0" 00:06:52.878 }, 00:06:52.878 { 00:06:52.878 "nbd_device": "/dev/nbd1", 00:06:52.878 "bdev_name": "Malloc1" 00:06:52.878 } 00:06:52.878 ]' 
00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:53.139 /dev/nbd1' 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:53.139 /dev/nbd1' 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:53.139 256+0 records in 00:06:53.139 256+0 records out 00:06:53.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134779 s, 77.8 MB/s 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:53.139 256+0 records in 00:06:53.139 256+0 records out 00:06:53.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247315 s, 42.4 MB/s 00:06:53.139 09:43:54 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:53.139 256+0 records in 00:06:53.139 256+0 records out 00:06:53.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258948 s, 40.5 MB/s 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.139 09:43:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.398 09:43:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.398 09:43:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.398 09:43:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.398 09:43:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.398 09:43:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.398 09:43:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.398 09:43:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:53.398 09:43:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.398 09:43:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.398 09:43:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:53.657 09:43:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:53.657 09:43:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:53.657 09:43:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:53.657 09:43:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.657 09:43:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.657 09:43:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:53.657 09:43:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:53.657 09:43:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.657 09:43:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.658 09:43:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.658 09:43:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.917 09:43:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:53.917 09:43:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:53.917 09:43:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.917 09:43:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:53.917 09:43:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:53.917 09:43:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.917 09:43:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:53.917 09:43:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:53.917 09:43:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:53.917 09:43:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:53.917 09:43:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:53.917 09:43:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:53.917 09:43:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:54.487 09:43:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:55.868 [2024-11-27 09:43:56.597793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:55.868 [2024-11-27 09:43:56.734239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.868 [2024-11-27 09:43:56.734243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.868 
[2024-11-27 09:43:56.967880] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:55.868 [2024-11-27 09:43:56.967985] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:57.246 spdk_app_start Round 1 00:06:57.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:57.246 09:43:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:57.246 09:43:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:57.246 09:43:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58599 /var/tmp/spdk-nbd.sock 00:06:57.246 09:43:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58599 ']' 00:06:57.246 09:43:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:57.246 09:43:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.246 09:43:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:57.246 09:43:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.246 09:43:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.511 09:43:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.511 09:43:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:57.511 09:43:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:57.779 Malloc0 00:06:57.779 09:43:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.039 Malloc1 00:06:58.039 09:43:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.039 09:43:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.039 09:43:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.039 09:43:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.039 09:43:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.039 09:43:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.039 09:43:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.039 09:43:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.039 09:43:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.039 09:43:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.039 09:43:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.039 09:43:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.039 09:43:59 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:58.039 09:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.039 09:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.039 09:43:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:58.300 /dev/nbd0 00:06:58.300 09:43:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.300 09:43:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:58.300 09:43:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:58.300 09:43:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:58.300 09:43:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.300 09:43:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.300 09:43:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:58.300 09:43:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:58.300 09:43:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.300 09:43:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.300 09:43:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.300 1+0 records in 00:06:58.300 1+0 records out 00:06:58.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362869 s, 11.3 MB/s 00:06:58.300 09:43:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.300 09:43:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:58.300 09:43:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.300 
09:43:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.300 09:43:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:58.300 09:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.300 09:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.300 09:43:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:58.561 /dev/nbd1 00:06:58.561 09:43:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:58.561 09:43:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:58.561 09:43:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:58.561 09:43:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:58.561 09:43:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.561 09:43:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.561 09:43:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:58.561 09:43:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:58.561 09:43:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.561 09:43:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.562 09:43:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.562 1+0 records in 00:06:58.562 1+0 records out 00:06:58.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226824 s, 18.1 MB/s 00:06:58.562 09:43:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.562 09:43:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:58.562 09:43:59 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.562 09:43:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.562 09:43:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:58.562 09:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.562 09:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.562 09:43:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.562 09:43:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.562 09:43:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.821 09:43:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:58.821 { 00:06:58.821 "nbd_device": "/dev/nbd0", 00:06:58.821 "bdev_name": "Malloc0" 00:06:58.821 }, 00:06:58.821 { 00:06:58.821 "nbd_device": "/dev/nbd1", 00:06:58.821 "bdev_name": "Malloc1" 00:06:58.821 } 00:06:58.821 ]' 00:06:58.821 09:43:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.821 09:43:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:58.821 { 00:06:58.821 "nbd_device": "/dev/nbd0", 00:06:58.821 "bdev_name": "Malloc0" 00:06:58.821 }, 00:06:58.821 { 00:06:58.821 "nbd_device": "/dev/nbd1", 00:06:58.821 "bdev_name": "Malloc1" 00:06:58.821 } 00:06:58.821 ]' 00:06:59.080 09:43:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.080 /dev/nbd1' 00:06:59.080 09:43:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.080 /dev/nbd1' 00:06:59.080 09:43:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.080 09:43:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:59.080 09:43:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:59.080 
09:43:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:59.080 09:43:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:59.080 09:43:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:59.080 09:43:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.080 09:43:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.080 09:43:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.080 09:43:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.080 09:43:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.080 09:43:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:59.080 256+0 records in 00:06:59.080 256+0 records out 00:06:59.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120981 s, 86.7 MB/s 00:06:59.080 09:43:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.080 09:43:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:59.080 256+0 records in 00:06:59.080 256+0 records out 00:06:59.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215962 s, 48.6 MB/s 00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.080 256+0 records in 00:06:59.080 256+0 records out 00:06:59.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257054 s, 40.8 MB/s 00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.080 09:44:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.340 09:44:00 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.340 09:44:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.340 09:44:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.340 09:44:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.340 09:44:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.340 09:44:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.340 09:44:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.340 09:44:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.340 09:44:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.340 09:44:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:59.599 09:44:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:59.599 09:44:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:59.599 09:44:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:59.599 09:44:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.599 09:44:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.599 09:44:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:59.599 09:44:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.599 09:44:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.599 09:44:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.599 09:44:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.599 09:44:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.858 09:44:00 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:59.858 09:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:59.858 09:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.858 09:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:59.858 09:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:59.858 09:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.858 09:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:59.858 09:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:59.858 09:44:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:59.858 09:44:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:59.858 09:44:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:59.858 09:44:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:59.858 09:44:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:00.488 09:44:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:01.442 [2024-11-27 09:44:02.541301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.700 [2024-11-27 09:44:02.678370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.700 [2024-11-27 09:44:02.678429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.959 [2024-11-27 09:44:02.906618] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:01.959 [2024-11-27 09:44:02.906732] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:03.335 spdk_app_start Round 2 00:07:03.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:03.335 09:44:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:03.335 09:44:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:03.335 09:44:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58599 /var/tmp/spdk-nbd.sock 00:07:03.335 09:44:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58599 ']' 00:07:03.335 09:44:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:03.335 09:44:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.335 09:44:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:03.335 09:44:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.335 09:44:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:03.593 09:44:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.593 09:44:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:03.593 09:44:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.853 Malloc0 00:07:03.853 09:44:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.114 Malloc1 00:07:04.114 09:44:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.114 09:44:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.114 09:44:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.114 09:44:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:04.114 09:44:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.114 09:44:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:04.114 09:44:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.114 09:44:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.114 09:44:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.114 09:44:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.114 09:44:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.114 09:44:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:04.114 09:44:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:04.114 09:44:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.114 09:44:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.114 09:44:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:04.373 /dev/nbd0 00:07:04.373 09:44:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:04.373 09:44:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:04.373 09:44:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:04.373 09:44:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:04.373 09:44:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.373 09:44:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.373 09:44:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:04.373 09:44:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:04.373 09:44:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:07:04.373 09:44:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.373 09:44:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.373 1+0 records in 00:07:04.373 1+0 records out 00:07:04.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409337 s, 10.0 MB/s 00:07:04.373 09:44:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:04.373 09:44:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:04.373 09:44:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:04.373 09:44:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.373 09:44:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:04.373 09:44:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.373 09:44:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.373 09:44:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:04.634 /dev/nbd1 00:07:04.634 09:44:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:04.634 09:44:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:04.634 09:44:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:04.634 09:44:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:04.634 09:44:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.634 09:44:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.634 09:44:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:04.634 09:44:05 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:07:04.634 09:44:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.634 09:44:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.634 09:44:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.634 1+0 records in 00:07:04.634 1+0 records out 00:07:04.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351631 s, 11.6 MB/s 00:07:04.634 09:44:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:04.634 09:44:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:04.634 09:44:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:04.634 09:44:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.634 09:44:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:04.634 09:44:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.634 09:44:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.634 09:44:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.634 09:44:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.634 09:44:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:04.894 { 00:07:04.894 "nbd_device": "/dev/nbd0", 00:07:04.894 "bdev_name": "Malloc0" 00:07:04.894 }, 00:07:04.894 { 00:07:04.894 "nbd_device": "/dev/nbd1", 00:07:04.894 "bdev_name": "Malloc1" 00:07:04.894 } 00:07:04.894 ]' 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:04.894 { 
00:07:04.894 "nbd_device": "/dev/nbd0", 00:07:04.894 "bdev_name": "Malloc0" 00:07:04.894 }, 00:07:04.894 { 00:07:04.894 "nbd_device": "/dev/nbd1", 00:07:04.894 "bdev_name": "Malloc1" 00:07:04.894 } 00:07:04.894 ]' 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:04.894 /dev/nbd1' 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:04.894 /dev/nbd1' 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:04.894 256+0 records in 00:07:04.894 256+0 records out 00:07:04.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00617422 s, 170 MB/s 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.894 09:44:05 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:04.894 256+0 records in 00:07:04.894 256+0 records out 00:07:04.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210161 s, 49.9 MB/s 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:04.894 256+0 records in 00:07:04.894 256+0 records out 00:07:04.894 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301073 s, 34.8 MB/s 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.894 09:44:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:05.155 09:44:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.155 09:44:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.155 09:44:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.155 09:44:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.155 09:44:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.155 09:44:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.155 09:44:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.155 09:44:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.155 09:44:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.155 09:44:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:05.416 09:44:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:05.416 09:44:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:05.416 09:44:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:05.416 09:44:06 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.416 09:44:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.416 09:44:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:05.416 09:44:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.416 09:44:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.416 09:44:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.416 09:44:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.416 09:44:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.676 09:44:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:05.676 09:44:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:05.676 09:44:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.676 09:44:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:05.676 09:44:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.676 09:44:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.676 09:44:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:05.676 09:44:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.676 09:44:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.676 09:44:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:05.676 09:44:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:05.676 09:44:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:05.676 09:44:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:06.246 09:44:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:07.628 
[2024-11-27 09:44:08.339813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:07.628 [2024-11-27 09:44:08.471655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.628 [2024-11-27 09:44:08.471660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.628 [2024-11-27 09:44:08.698375] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:07.628 [2024-11-27 09:44:08.698585] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:09.017 09:44:10 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58599 /var/tmp/spdk-nbd.sock 00:07:09.018 09:44:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58599 ']' 00:07:09.018 09:44:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:09.018 09:44:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.018 09:44:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:09.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:09.018 09:44:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.018 09:44:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:09.277 09:44:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.277 09:44:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:09.277 09:44:10 event.app_repeat -- event/event.sh@39 -- # killprocess 58599 00:07:09.277 09:44:10 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58599 ']' 00:07:09.277 09:44:10 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58599 00:07:09.277 09:44:10 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:09.277 09:44:10 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.277 09:44:10 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58599 00:07:09.277 09:44:10 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.277 killing process with pid 58599 00:07:09.277 09:44:10 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.277 09:44:10 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58599' 00:07:09.277 09:44:10 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58599 00:07:09.277 09:44:10 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58599 00:07:10.712 spdk_app_start is called in Round 0. 00:07:10.712 Shutdown signal received, stop current app iteration 00:07:10.712 Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 reinitialization... 00:07:10.712 spdk_app_start is called in Round 1. 00:07:10.712 Shutdown signal received, stop current app iteration 00:07:10.712 Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 reinitialization... 00:07:10.712 spdk_app_start is called in Round 2. 
00:07:10.712 Shutdown signal received, stop current app iteration 00:07:10.712 Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 reinitialization... 00:07:10.712 spdk_app_start is called in Round 3. 00:07:10.712 Shutdown signal received, stop current app iteration 00:07:10.712 09:44:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:10.712 ************************************ 00:07:10.712 END TEST app_repeat 00:07:10.712 ************************************ 00:07:10.712 09:44:11 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:10.712 00:07:10.712 real 0m19.725s 00:07:10.712 user 0m41.803s 00:07:10.712 sys 0m3.179s 00:07:10.712 09:44:11 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.712 09:44:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.712 09:44:11 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:10.712 09:44:11 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:10.712 09:44:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.712 09:44:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.712 09:44:11 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.712 ************************************ 00:07:10.712 START TEST cpu_locks 00:07:10.712 ************************************ 00:07:10.712 09:44:11 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:10.712 * Looking for test storage... 
00:07:10.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:10.713 09:44:11 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:10.713 09:44:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:10.713 09:44:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:10.713 09:44:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.713 09:44:11 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:10.713 09:44:11 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.713 09:44:11 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:10.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.713 --rc genhtml_branch_coverage=1 00:07:10.713 --rc genhtml_function_coverage=1 00:07:10.713 --rc genhtml_legend=1 00:07:10.713 --rc geninfo_all_blocks=1 00:07:10.713 --rc geninfo_unexecuted_blocks=1 00:07:10.713 00:07:10.713 ' 00:07:10.713 09:44:11 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:10.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.713 --rc genhtml_branch_coverage=1 00:07:10.713 --rc genhtml_function_coverage=1 00:07:10.713 --rc genhtml_legend=1 00:07:10.713 --rc geninfo_all_blocks=1 00:07:10.713 --rc geninfo_unexecuted_blocks=1 
00:07:10.713 00:07:10.713 ' 00:07:10.713 09:44:11 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:10.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.713 --rc genhtml_branch_coverage=1 00:07:10.713 --rc genhtml_function_coverage=1 00:07:10.713 --rc genhtml_legend=1 00:07:10.713 --rc geninfo_all_blocks=1 00:07:10.713 --rc geninfo_unexecuted_blocks=1 00:07:10.713 00:07:10.713 ' 00:07:10.713 09:44:11 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:10.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.713 --rc genhtml_branch_coverage=1 00:07:10.713 --rc genhtml_function_coverage=1 00:07:10.713 --rc genhtml_legend=1 00:07:10.713 --rc geninfo_all_blocks=1 00:07:10.713 --rc geninfo_unexecuted_blocks=1 00:07:10.713 00:07:10.713 ' 00:07:10.713 09:44:11 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:10.713 09:44:11 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:10.713 09:44:11 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:10.713 09:44:11 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:10.713 09:44:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.713 09:44:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.713 09:44:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.713 ************************************ 00:07:10.713 START TEST default_locks 00:07:10.713 ************************************ 00:07:10.713 09:44:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:10.713 09:44:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59048 00:07:10.713 09:44:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.713 
09:44:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59048 00:07:10.713 09:44:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59048 ']' 00:07:10.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.713 09:44:11 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.713 09:44:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.713 09:44:11 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.713 09:44:11 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.713 09:44:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.975 [2024-11-27 09:44:11.919149] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:07:10.975 [2024-11-27 09:44:11.919438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59048 ] 00:07:10.976 [2024-11-27 09:44:12.098107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.237 [2024-11-27 09:44:12.233306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.175 09:44:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.175 09:44:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:12.175 09:44:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59048 00:07:12.175 09:44:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59048 00:07:12.175 09:44:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:12.434 09:44:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59048 00:07:12.434 09:44:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59048 ']' 00:07:12.434 09:44:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59048 00:07:12.434 09:44:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:12.434 09:44:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.434 09:44:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59048 00:07:12.434 killing process with pid 59048 00:07:12.434 09:44:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.434 09:44:13 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.434 09:44:13 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59048' 00:07:12.434 09:44:13 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59048 00:07:12.434 09:44:13 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59048 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59048 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59048 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59048 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59048 ']' 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.973 ERROR: process (pid: 59048) is no longer running 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.973 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59048) - No such process 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:14.973 00:07:14.973 real 0m4.237s 00:07:14.973 user 0m3.963s 00:07:14.973 sys 0m0.771s 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.973 09:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.973 ************************************ 00:07:14.973 END TEST default_locks 00:07:14.973 ************************************ 00:07:14.973 09:44:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:14.973 09:44:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.973 09:44:16 event.cpu_locks -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.973 09:44:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.234 ************************************ 00:07:15.234 START TEST default_locks_via_rpc 00:07:15.234 ************************************ 00:07:15.234 09:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:15.234 09:44:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59123 00:07:15.234 09:44:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:15.234 09:44:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59123 00:07:15.234 09:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59123 ']' 00:07:15.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.234 09:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.234 09:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.234 09:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.234 09:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.234 09:44:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.234 [2024-11-27 09:44:16.223610] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:07:15.234 [2024-11-27 09:44:16.223782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59123 ] 00:07:15.494 [2024-11-27 09:44:16.405137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.494 [2024-11-27 09:44:16.548138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.879 09:44:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59123 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59123 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59123 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59123 ']' 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59123 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.879 09:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59123 00:07:16.879 killing process with pid 59123 00:07:16.879 09:44:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.879 09:44:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.879 09:44:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59123' 00:07:16.879 09:44:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59123 00:07:16.879 09:44:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59123 00:07:20.174 ************************************ 00:07:20.174 END TEST default_locks_via_rpc 00:07:20.174 ************************************ 00:07:20.174 00:07:20.174 real 0m4.559s 00:07:20.174 user 0m4.321s 00:07:20.174 sys 0m0.837s 00:07:20.174 
09:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.174 09:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.174 09:44:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:20.174 09:44:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.174 09:44:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.174 09:44:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.174 ************************************ 00:07:20.174 START TEST non_locking_app_on_locked_coremask 00:07:20.174 ************************************ 00:07:20.174 09:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:20.174 09:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59210 00:07:20.174 09:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:20.174 09:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59210 /var/tmp/spdk.sock 00:07:20.174 09:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59210 ']' 00:07:20.174 09:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.174 09:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.174 09:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:20.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.174 09:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.174 09:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.174 [2024-11-27 09:44:20.844170] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:07:20.174 [2024-11-27 09:44:20.844452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59210 ] 00:07:20.174 [2024-11-27 09:44:21.026319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.174 [2024-11-27 09:44:21.165490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.115 09:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.115 09:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:21.115 09:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:21.115 09:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59226 00:07:21.115 09:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59226 /var/tmp/spdk2.sock 00:07:21.115 09:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59226 ']' 00:07:21.115 09:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.115 09:44:22 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.115 09:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.115 09:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.115 09:44:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.375 [2024-11-27 09:44:22.316879] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:07:21.375 [2024-11-27 09:44:22.317253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59226 ] 00:07:21.375 [2024-11-27 09:44:22.500571] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:21.375 [2024-11-27 09:44:22.500689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.943 [2024-11-27 09:44:22.802334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.854 09:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.854 09:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:23.854 09:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59210 00:07:23.854 09:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59210 00:07:23.854 09:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:24.793 09:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59210 00:07:24.793 09:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59210 ']' 00:07:24.793 09:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59210 00:07:24.793 09:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:24.793 09:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.793 09:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59210 00:07:24.793 09:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.793 09:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.793 killing process with pid 59210 00:07:24.793 09:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59210' 00:07:24.793 09:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59210 00:07:24.793 09:44:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59210 00:07:30.075 09:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59226 00:07:30.075 09:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59226 ']' 00:07:30.075 09:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59226 00:07:30.075 09:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:30.075 09:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.075 09:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59226 00:07:30.334 killing process with pid 59226 00:07:30.334 09:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.334 09:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.334 09:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59226' 00:07:30.334 09:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59226 00:07:30.334 09:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59226 00:07:32.868 ************************************ 00:07:32.868 END TEST non_locking_app_on_locked_coremask 00:07:32.868 ************************************ 00:07:32.868 00:07:32.868 real 0m13.146s 
00:07:32.868 user 0m13.143s 00:07:32.868 sys 0m1.815s 00:07:32.868 09:44:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.868 09:44:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.868 09:44:33 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:32.868 09:44:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.868 09:44:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.868 09:44:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.868 ************************************ 00:07:32.868 START TEST locking_app_on_unlocked_coremask 00:07:32.868 ************************************ 00:07:32.868 09:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:32.868 09:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59386 00:07:32.868 09:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:32.868 09:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59386 /var/tmp/spdk.sock 00:07:32.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:32.868 09:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59386 ']' 00:07:32.868 09:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.868 09:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.868 09:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.868 09:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.868 09:44:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.127 [2024-11-27 09:44:34.054299] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:07:33.127 [2024-11-27 09:44:34.054448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59386 ] 00:07:33.127 [2024-11-27 09:44:34.230220] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:33.127 [2024-11-27 09:44:34.230432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.385 [2024-11-27 09:44:34.374556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.324 09:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.324 09:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:34.324 09:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59412 00:07:34.324 09:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:34.324 09:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59412 /var/tmp/spdk2.sock 00:07:34.324 09:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59412 ']' 00:07:34.324 09:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:34.324 09:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.324 09:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:34.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:34.324 09:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.324 09:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:34.584 [2024-11-27 09:44:35.497693] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:07:34.584 [2024-11-27 09:44:35.497935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59412 ] 00:07:34.584 [2024-11-27 09:44:35.669985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.845 [2024-11-27 09:44:35.953745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.387 09:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.387 09:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:37.387 09:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59412 00:07:37.387 09:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59412 00:07:37.387 09:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:37.647 09:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59386 00:07:37.647 09:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59386 ']' 00:07:37.647 09:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59386 00:07:37.647 09:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:37.647 09:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.647 09:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59386 00:07:37.647 09:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:07:37.647 09:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.647 killing process with pid 59386 00:07:37.647 09:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59386' 00:07:37.647 09:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59386 00:07:37.647 09:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59386 00:07:42.998 09:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59412 00:07:42.998 09:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59412 ']' 00:07:42.998 09:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59412 00:07:42.998 09:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:42.998 09:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.998 09:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59412 00:07:42.998 killing process with pid 59412 00:07:42.998 09:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:42.998 09:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:42.998 09:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59412' 00:07:42.998 09:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59412 00:07:42.998 09:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59412 00:07:45.538 00:07:45.538 real 0m12.617s 00:07:45.538 user 0m12.530s 00:07:45.538 sys 0m1.606s 00:07:45.538 09:44:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.538 09:44:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.538 ************************************ 00:07:45.538 END TEST locking_app_on_unlocked_coremask 00:07:45.538 ************************************ 00:07:45.538 09:44:46 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:45.538 09:44:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.538 09:44:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.538 09:44:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:45.538 ************************************ 00:07:45.538 START TEST locking_app_on_locked_coremask 00:07:45.538 ************************************ 00:07:45.538 09:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:45.538 09:44:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59569 00:07:45.538 09:44:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:45.538 09:44:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59569 /var/tmp/spdk.sock 00:07:45.538 09:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59569 ']' 00:07:45.538 09:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.538 09:44:46 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.539 09:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.539 09:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.539 09:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.798 [2024-11-27 09:44:46.740747] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:07:45.798 [2024-11-27 09:44:46.740892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59569 ] 00:07:45.798 [2024-11-27 09:44:46.915050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.057 [2024-11-27 09:44:47.056237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.996 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.996 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:46.996 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:46.996 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59585 00:07:46.996 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59585 /var/tmp/spdk2.sock 00:07:46.996 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:07:46.996 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59585 /var/tmp/spdk2.sock 00:07:46.996 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:46.996 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.996 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:46.996 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.996 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59585 /var/tmp/spdk2.sock 00:07:46.996 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59585 ']' 00:07:46.996 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:46.996 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.997 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:46.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:46.997 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.997 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.257 [2024-11-27 09:44:48.139471] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:07:47.257 [2024-11-27 09:44:48.139751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59585 ] 00:07:47.257 [2024-11-27 09:44:48.323120] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59569 has claimed it. 00:07:47.257 [2024-11-27 09:44:48.323223] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:47.827 ERROR: process (pid: 59585) is no longer running 00:07:47.827 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59585) - No such process 00:07:47.827 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.827 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:47.828 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:47.828 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:47.828 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:47.828 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:47.828 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59569 00:07:47.828 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59569 00:07:47.828 09:44:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:48.088 09:44:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59569 00:07:48.088 09:44:49 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59569 ']' 00:07:48.088 09:44:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59569 00:07:48.088 09:44:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:48.088 09:44:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.088 09:44:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59569 00:07:48.346 killing process with pid 59569 00:07:48.346 09:44:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.346 09:44:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.346 09:44:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59569' 00:07:48.346 09:44:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59569 00:07:48.346 09:44:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59569 00:07:50.950 00:07:50.950 real 0m5.212s 00:07:50.950 user 0m5.207s 00:07:50.950 sys 0m1.009s 00:07:50.950 09:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.950 ************************************ 00:07:50.950 END TEST locking_app_on_locked_coremask 00:07:50.950 ************************************ 00:07:50.950 09:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:50.950 09:44:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:50.950 09:44:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:07:50.950 09:44:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.950 09:44:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:50.950 ************************************ 00:07:50.950 START TEST locking_overlapped_coremask 00:07:50.950 ************************************ 00:07:50.950 09:44:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:50.950 09:44:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59660 00:07:50.950 09:44:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:50.950 09:44:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59660 /var/tmp/spdk.sock 00:07:50.950 09:44:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59660 ']' 00:07:50.950 09:44:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.950 09:44:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.950 09:44:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.950 09:44:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.950 09:44:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:50.950 [2024-11-27 09:44:52.020614] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:07:50.950 [2024-11-27 09:44:52.020778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59660 ] 00:07:51.210 [2024-11-27 09:44:52.190742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.469 [2024-11-27 09:44:52.340476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.469 [2024-11-27 09:44:52.340614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.469 [2024-11-27 09:44:52.340655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.409 09:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.409 09:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:52.409 09:44:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59678 00:07:52.409 09:44:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:52.409 09:44:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59678 /var/tmp/spdk2.sock 00:07:52.409 09:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:52.409 09:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59678 /var/tmp/spdk2.sock 00:07:52.409 09:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:52.409 09:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.409 09:44:53 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:52.409 09:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.409 09:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59678 /var/tmp/spdk2.sock 00:07:52.409 09:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59678 ']' 00:07:52.409 09:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:52.409 09:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.409 09:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:52.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:52.409 09:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.409 09:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.409 [2024-11-27 09:44:53.455969] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:07:52.409 [2024-11-27 09:44:53.456223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59678 ] 00:07:52.669 [2024-11-27 09:44:53.637297] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59660 has claimed it. 00:07:52.669 [2024-11-27 09:44:53.637400] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:53.238 ERROR: process (pid: 59678) is no longer running 00:07:53.238 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59678) - No such process 00:07:53.238 09:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.238 09:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:53.238 09:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:53.238 09:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:53.238 09:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:53.238 09:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:53.238 09:44:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:53.239 09:44:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:53.239 09:44:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:53.239 09:44:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:53.239 09:44:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59660 00:07:53.239 09:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59660 ']' 00:07:53.239 09:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59660 00:07:53.239 09:44:54 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:53.239 09:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.239 09:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59660 00:07:53.239 09:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.239 09:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.239 killing process with pid 59660 00:07:53.239 09:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59660' 00:07:53.239 09:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59660 00:07:53.239 09:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59660 00:07:55.777 00:07:55.777 real 0m4.920s 00:07:55.777 user 0m13.152s 00:07:55.777 sys 0m0.820s 00:07:55.777 09:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.777 09:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:55.777 ************************************ 00:07:55.777 END TEST locking_overlapped_coremask 00:07:55.777 ************************************ 00:07:55.777 09:44:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:55.777 09:44:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.777 09:44:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.777 09:44:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:55.777 ************************************ 00:07:55.777 START TEST 
locking_overlapped_coremask_via_rpc 00:07:55.777 ************************************ 00:07:55.777 09:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:55.777 09:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59753 00:07:55.777 09:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59753 /var/tmp/spdk.sock 00:07:55.777 09:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:55.777 09:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59753 ']' 00:07:55.777 09:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.777 09:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.777 09:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.777 09:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.777 09:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.036 [2024-11-27 09:44:57.005332] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:07:56.036 [2024-11-27 09:44:57.005551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59753 ] 00:07:56.296 [2024-11-27 09:44:57.182854] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:56.296 [2024-11-27 09:44:57.183043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:56.296 [2024-11-27 09:44:57.329174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.296 [2024-11-27 09:44:57.329327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.296 [2024-11-27 09:44:57.329367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.235 09:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.235 09:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:57.235 09:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59771 00:07:57.235 09:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:57.235 09:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59771 /var/tmp/spdk2.sock 00:07:57.235 09:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59771 ']' 00:07:57.235 09:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:57.235 09:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.235 09:44:58 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:57.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:57.236 09:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.236 09:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.495 [2024-11-27 09:44:58.450862] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:07:57.495 [2024-11-27 09:44:58.451148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59771 ] 00:07:57.495 [2024-11-27 09:44:58.624930] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:57.754 [2024-11-27 09:44:58.629016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:58.013 [2024-11-27 09:44:58.933280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.013 [2024-11-27 09:44:58.933439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.013 [2024-11-27 09:44:58.933476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:59.917 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.917 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:59.917 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:59.917 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.917 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.917 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.917 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:59.917 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:59.917 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:59.917 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:59.917 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.917 09:45:01 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:59.917 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.917 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:59.917 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.917 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.917 [2024-11-27 09:45:01.037262] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59753 has claimed it. 00:08:00.175 request: 00:08:00.175 { 00:08:00.175 "method": "framework_enable_cpumask_locks", 00:08:00.175 "req_id": 1 00:08:00.175 } 00:08:00.175 Got JSON-RPC error response 00:08:00.175 response: 00:08:00.175 { 00:08:00.175 "code": -32603, 00:08:00.175 "message": "Failed to claim CPU core: 2" 00:08:00.175 } 00:08:00.175 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:00.175 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:00.175 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:00.175 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:00.175 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:00.175 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59753 /var/tmp/spdk.sock 00:08:00.175 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59753 ']' 00:08:00.175 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.175 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.175 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.175 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.175 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.175 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.175 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:00.176 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59771 /var/tmp/spdk2.sock 00:08:00.176 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59771 ']' 00:08:00.176 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:00.176 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.176 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:00.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:00.176 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.176 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.435 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.435 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:00.435 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:00.435 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:00.435 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:00.435 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:00.435 00:08:00.435 real 0m4.590s 00:08:00.435 user 0m1.289s 00:08:00.435 sys 0m0.235s 00:08:00.435 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.435 09:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:00.435 ************************************ 00:08:00.435 END TEST locking_overlapped_coremask_via_rpc 00:08:00.435 ************************************ 00:08:00.435 09:45:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:00.435 09:45:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59753 ]] 00:08:00.435 09:45:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59753 00:08:00.435 09:45:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59753 ']' 00:08:00.435 09:45:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59753 00:08:00.435 09:45:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:00.435 09:45:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.435 09:45:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59753 00:08:00.694 09:45:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.694 09:45:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.694 09:45:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59753' 00:08:00.694 killing process with pid 59753 00:08:00.694 09:45:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59753 00:08:00.694 09:45:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59753 00:08:03.226 09:45:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59771 ]] 00:08:03.226 09:45:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59771 00:08:03.226 09:45:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59771 ']' 00:08:03.226 09:45:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59771 00:08:03.226 09:45:04 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:03.226 09:45:04 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.226 09:45:04 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59771 00:08:03.226 killing process with pid 59771 00:08:03.226 09:45:04 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:03.226 09:45:04 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:03.226 09:45:04 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59771' 00:08:03.226 09:45:04 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59771 00:08:03.226 09:45:04 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59771 00:08:05.762 09:45:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:05.762 Process with pid 59753 is not found 00:08:05.762 09:45:06 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:05.762 09:45:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59753 ]] 00:08:05.762 09:45:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59753 00:08:05.762 09:45:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59753 ']' 00:08:05.762 09:45:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59753 00:08:05.762 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59753) - No such process 00:08:05.762 09:45:06 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59753 is not found' 00:08:05.762 Process with pid 59771 is not found 00:08:05.762 09:45:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59771 ]] 00:08:05.762 09:45:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59771 00:08:05.762 09:45:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59771 ']' 00:08:05.762 09:45:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59771 00:08:05.762 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59771) - No such process 00:08:05.762 09:45:06 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59771 is not found' 00:08:05.762 09:45:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:05.762 00:08:05.762 real 0m55.178s 00:08:05.762 user 1m31.055s 00:08:05.762 sys 0m8.697s 00:08:05.762 ************************************ 00:08:05.762 END TEST cpu_locks 00:08:05.762 ************************************ 00:08:05.762 09:45:06 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:05.762 09:45:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:05.762 ************************************ 00:08:05.762 END TEST event 00:08:05.762 ************************************ 00:08:05.762 00:08:05.762 real 1m27.796s 00:08:05.762 user 2m35.731s 00:08:05.762 sys 0m13.304s 00:08:05.762 09:45:06 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:05.762 09:45:06 event -- common/autotest_common.sh@10 -- # set +x 00:08:05.762 09:45:06 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:05.762 09:45:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:05.762 09:45:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:05.762 09:45:06 -- common/autotest_common.sh@10 -- # set +x 00:08:05.762 ************************************ 00:08:05.762 START TEST thread 00:08:05.762 ************************************ 00:08:05.762 09:45:06 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:06.023 * Looking for test storage... 
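The `check_remaining_locks` step earlier in this log verifies that exactly the per-core lock files `/var/tmp/spdk_cpu_lock_000` through `_002` exist, by comparing a glob of the actual files against a brace-expanded expected list. A sketch of that comparison; the directory parameter is added here so the example is self-contained, and is not part of the original helper:

```shell
# Hypothetical sketch of the check_remaining_locks pattern: expand the
# actual per-core lock files and compare them, as a single string, against
# the brace-generated expected list of three locks.
check_remaining_locks() {
    local dir=${1:-/var/tmp}
    local locks=("$dir"/spdk_cpu_lock_*)
    local expected=("$dir"/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${expected[*]}" ]]
}
```

Because the glob sorts lexicographically, any extra or missing lock file makes the joined strings differ and the check fail.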
00:08:06.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:06.023 09:45:06 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:06.023 09:45:06 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:08:06.023 09:45:06 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:06.023 09:45:07 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:06.023 09:45:07 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.023 09:45:07 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.023 09:45:07 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.023 09:45:07 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.023 09:45:07 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.023 09:45:07 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.023 09:45:07 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.023 09:45:07 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.023 09:45:07 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.023 09:45:07 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.023 09:45:07 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.023 09:45:07 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:06.023 09:45:07 thread -- scripts/common.sh@345 -- # : 1 00:08:06.023 09:45:07 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.023 09:45:07 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:06.023 09:45:07 thread -- scripts/common.sh@365 -- # decimal 1 00:08:06.023 09:45:07 thread -- scripts/common.sh@353 -- # local d=1 00:08:06.023 09:45:07 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.023 09:45:07 thread -- scripts/common.sh@355 -- # echo 1 00:08:06.023 09:45:07 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.023 09:45:07 thread -- scripts/common.sh@366 -- # decimal 2 00:08:06.023 09:45:07 thread -- scripts/common.sh@353 -- # local d=2 00:08:06.023 09:45:07 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.023 09:45:07 thread -- scripts/common.sh@355 -- # echo 2 00:08:06.023 09:45:07 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.023 09:45:07 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.023 09:45:07 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.023 09:45:07 thread -- scripts/common.sh@368 -- # return 0 00:08:06.023 09:45:07 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.023 09:45:07 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:06.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.023 --rc genhtml_branch_coverage=1 00:08:06.023 --rc genhtml_function_coverage=1 00:08:06.023 --rc genhtml_legend=1 00:08:06.023 --rc geninfo_all_blocks=1 00:08:06.023 --rc geninfo_unexecuted_blocks=1 00:08:06.023 00:08:06.023 ' 00:08:06.023 09:45:07 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:06.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.023 --rc genhtml_branch_coverage=1 00:08:06.023 --rc genhtml_function_coverage=1 00:08:06.023 --rc genhtml_legend=1 00:08:06.023 --rc geninfo_all_blocks=1 00:08:06.023 --rc geninfo_unexecuted_blocks=1 00:08:06.023 00:08:06.023 ' 00:08:06.023 09:45:07 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:06.023 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.023 --rc genhtml_branch_coverage=1 00:08:06.023 --rc genhtml_function_coverage=1 00:08:06.023 --rc genhtml_legend=1 00:08:06.023 --rc geninfo_all_blocks=1 00:08:06.023 --rc geninfo_unexecuted_blocks=1 00:08:06.023 00:08:06.023 ' 00:08:06.023 09:45:07 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:06.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.023 --rc genhtml_branch_coverage=1 00:08:06.023 --rc genhtml_function_coverage=1 00:08:06.023 --rc genhtml_legend=1 00:08:06.023 --rc geninfo_all_blocks=1 00:08:06.023 --rc geninfo_unexecuted_blocks=1 00:08:06.023 00:08:06.023 ' 00:08:06.023 09:45:07 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:06.023 09:45:07 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:06.023 09:45:07 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.023 09:45:07 thread -- common/autotest_common.sh@10 -- # set +x 00:08:06.023 ************************************ 00:08:06.023 START TEST thread_poller_perf 00:08:06.023 ************************************ 00:08:06.023 09:45:07 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:06.283 [2024-11-27 09:45:07.155371] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
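The `scripts/common.sh` trace above (`lt 1.15 2` calling `cmp_versions 1.15 '<' 2`) splits both version strings on `.`, `-` and `:` via `IFS=.-:` and `read -ra`, then compares them component by component to decide whether the installed lcov predates 2.0. A hedged reimplementation of just the less-than case, assuming purely numeric components (the real script handles the full operator set):

```shell
# Sketch of the cmp_versions less-than comparison: split on '.', '-', ':'
# and compare numeric components left to right; missing components count as 0.
version_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i
    for ((i = 0; i < n; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal versions are not less-than
}
```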
00:08:06.283 [2024-11-27 09:45:07.155573] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59972 ] 00:08:06.283 [2024-11-27 09:45:07.331937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.542 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:06.542 [2024-11-27 09:45:07.475650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.946 [2024-11-27T09:45:09.079Z] ====================================== 00:08:07.946 [2024-11-27T09:45:09.079Z] busy:2298486960 (cyc) 00:08:07.946 [2024-11-27T09:45:09.079Z] total_run_count: 397000 00:08:07.946 [2024-11-27T09:45:09.079Z] tsc_hz: 2290000000 (cyc) 00:08:07.946 [2024-11-27T09:45:09.079Z] ====================================== 00:08:07.946 [2024-11-27T09:45:09.079Z] poller_cost: 5789 (cyc), 2527 (nsec) 00:08:07.946 ************************************ 00:08:07.946 END TEST thread_poller_perf 00:08:07.946 ************************************ 00:08:07.946 00:08:07.946 real 0m1.621s 00:08:07.946 user 0m1.400s 00:08:07.946 sys 0m0.114s 00:08:07.946 09:45:08 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.946 09:45:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:07.946 09:45:08 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:07.946 09:45:08 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:07.946 09:45:08 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.947 09:45:08 thread -- common/autotest_common.sh@10 -- # set +x 00:08:07.947 ************************************ 00:08:07.947 START TEST thread_poller_perf 00:08:07.947 
************************************ 00:08:07.947 09:45:08 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:07.947 [2024-11-27 09:45:08.847805] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:08:07.947 [2024-11-27 09:45:08.847934] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60008 ] 00:08:07.947 [2024-11-27 09:45:09.010623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.243 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:08.243 [2024-11-27 09:45:09.156642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.625 [2024-11-27T09:45:10.758Z] ====================================== 00:08:09.625 [2024-11-27T09:45:10.758Z] busy:2294527528 (cyc) 00:08:09.625 [2024-11-27T09:45:10.758Z] total_run_count: 5347000 00:08:09.625 [2024-11-27T09:45:10.758Z] tsc_hz: 2290000000 (cyc) 00:08:09.625 [2024-11-27T09:45:10.758Z] ====================================== 00:08:09.625 [2024-11-27T09:45:10.758Z] poller_cost: 429 (cyc), 187 (nsec) 00:08:09.625 00:08:09.625 real 0m1.610s 00:08:09.625 user 0m1.384s 00:08:09.625 sys 0m0.119s 00:08:09.625 09:45:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.625 09:45:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:09.625 ************************************ 00:08:09.625 END TEST thread_poller_perf 00:08:09.625 ************************************ 00:08:09.625 09:45:10 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:09.625 00:08:09.625 real 0m3.603s 00:08:09.625 user 0m2.946s 00:08:09.625 sys 0m0.459s 00:08:09.625 09:45:10 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.625 09:45:10 thread -- common/autotest_common.sh@10 -- # set +x 00:08:09.625 ************************************ 00:08:09.625 END TEST thread 00:08:09.625 ************************************ 00:08:09.625 09:45:10 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:09.625 09:45:10 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:09.625 09:45:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.625 09:45:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.625 09:45:10 -- common/autotest_common.sh@10 -- # set +x 00:08:09.625 ************************************ 00:08:09.625 START TEST app_cmdline 00:08:09.625 ************************************ 00:08:09.625 09:45:10 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:09.625 * Looking for test storage... 00:08:09.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:09.625 09:45:10 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:09.625 09:45:10 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:08:09.625 09:45:10 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:09.625 09:45:10 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.626 09:45:10 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:09.886 09:45:10 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.886 09:45:10 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.886 09:45:10 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.886 09:45:10 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:09.886 09:45:10 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.886 09:45:10 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:09.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.886 --rc genhtml_branch_coverage=1 00:08:09.886 --rc genhtml_function_coverage=1 00:08:09.886 --rc 
genhtml_legend=1 00:08:09.886 --rc geninfo_all_blocks=1 00:08:09.886 --rc geninfo_unexecuted_blocks=1 00:08:09.886 00:08:09.886 ' 00:08:09.886 09:45:10 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:09.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.886 --rc genhtml_branch_coverage=1 00:08:09.886 --rc genhtml_function_coverage=1 00:08:09.886 --rc genhtml_legend=1 00:08:09.886 --rc geninfo_all_blocks=1 00:08:09.886 --rc geninfo_unexecuted_blocks=1 00:08:09.886 00:08:09.886 ' 00:08:09.886 09:45:10 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:09.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.886 --rc genhtml_branch_coverage=1 00:08:09.886 --rc genhtml_function_coverage=1 00:08:09.886 --rc genhtml_legend=1 00:08:09.886 --rc geninfo_all_blocks=1 00:08:09.886 --rc geninfo_unexecuted_blocks=1 00:08:09.886 00:08:09.886 ' 00:08:09.886 09:45:10 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:09.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.886 --rc genhtml_branch_coverage=1 00:08:09.886 --rc genhtml_function_coverage=1 00:08:09.886 --rc genhtml_legend=1 00:08:09.886 --rc geninfo_all_blocks=1 00:08:09.886 --rc geninfo_unexecuted_blocks=1 00:08:09.886 00:08:09.886 ' 00:08:09.886 09:45:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:09.886 09:45:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60097 00:08:09.886 09:45:10 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:09.886 09:45:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60097 00:08:09.886 09:45:10 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60097 ']' 00:08:09.886 09:45:10 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.886 09:45:10 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:08:09.886 09:45:10 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.886 09:45:10 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.886 09:45:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:09.886 [2024-11-27 09:45:10.851585] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:08:09.886 [2024-11-27 09:45:10.851790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60097 ] 00:08:10.146 [2024-11-27 09:45:11.025971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.146 [2024-11-27 09:45:11.168482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.090 09:45:12 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.090 09:45:12 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:11.090 09:45:12 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:11.350 { 00:08:11.350 "version": "SPDK v25.01-pre git sha1 597702889", 00:08:11.350 "fields": { 00:08:11.350 "major": 25, 00:08:11.350 "minor": 1, 00:08:11.350 "patch": 0, 00:08:11.350 "suffix": "-pre", 00:08:11.350 "commit": "597702889" 00:08:11.350 } 00:08:11.350 } 00:08:11.350 09:45:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:11.350 09:45:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:11.350 09:45:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:11.350 09:45:12 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:11.350 09:45:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:11.350 09:45:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:11.350 09:45:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:11.350 09:45:12 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.350 09:45:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:11.350 09:45:12 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.350 09:45:12 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:11.350 09:45:12 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:11.350 09:45:12 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:11.350 09:45:12 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:11.350 09:45:12 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:11.350 09:45:12 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:11.350 09:45:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.350 09:45:12 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:11.350 09:45:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.350 09:45:12 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:11.350 09:45:12 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:11.350 09:45:12 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:11.350 09:45:12 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:11.350 09:45:12 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:11.609 request: 00:08:11.609 { 00:08:11.609 "method": "env_dpdk_get_mem_stats", 00:08:11.609 "req_id": 1 00:08:11.609 } 00:08:11.609 Got JSON-RPC error response 00:08:11.609 response: 00:08:11.609 { 00:08:11.609 "code": -32601, 00:08:11.609 "message": "Method not found" 00:08:11.609 } 00:08:11.609 09:45:12 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:11.609 09:45:12 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:11.609 09:45:12 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:11.609 09:45:12 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:11.609 09:45:12 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60097 00:08:11.609 09:45:12 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60097 ']' 00:08:11.609 09:45:12 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60097 00:08:11.609 09:45:12 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:11.609 09:45:12 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.609 09:45:12 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60097 00:08:11.609 09:45:12 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.609 09:45:12 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.609 09:45:12 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60097' 00:08:11.609 killing process with pid 60097 00:08:11.609 09:45:12 app_cmdline -- common/autotest_common.sh@973 -- # kill 60097 00:08:11.610 09:45:12 app_cmdline -- common/autotest_common.sh@978 -- # wait 60097 00:08:14.897 ************************************ 00:08:14.897 END TEST app_cmdline 00:08:14.897 ************************************ 
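The `killprocess` helper invoked throughout this log validates its pid argument, probes the process, and prints `Process with pid N is not found` once the target has already exited. A simplified stand-in for illustration only: the real `autotest_common.sh` helper additionally inspects the process name via `ps --no-headers -o comm=` and special-cases `sudo`, which this sketch omits:

```shell
# Hypothetical simplified killprocess: refuse an empty pid, check the
# process exists with a signal-0 probe, then kill it and reap it.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"
        wait "$pid" 2>/dev/null
        echo "killed process $pid"
    else
        echo "Process with pid $pid is not found"
    fi
}
```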
00:08:14.897 00:08:14.897 real 0m4.742s 00:08:14.897 user 0m4.773s 00:08:14.897 sys 0m0.794s 00:08:14.897 09:45:15 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.897 09:45:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:14.898 09:45:15 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:14.898 09:45:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.898 09:45:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.898 09:45:15 -- common/autotest_common.sh@10 -- # set +x 00:08:14.898 ************************************ 00:08:14.898 START TEST version 00:08:14.898 ************************************ 00:08:14.898 09:45:15 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:14.898 * Looking for test storage... 00:08:14.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:14.898 09:45:15 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:14.898 09:45:15 version -- common/autotest_common.sh@1693 -- # lcov --version 00:08:14.898 09:45:15 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:14.898 09:45:15 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:14.898 09:45:15 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.898 09:45:15 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.898 09:45:15 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.898 09:45:15 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.898 09:45:15 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.898 09:45:15 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.898 09:45:15 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.898 09:45:15 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.898 09:45:15 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.898 09:45:15 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:08:14.898 09:45:15 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:14.898 09:45:15 version -- scripts/common.sh@344 -- # case "$op" in 00:08:14.898 09:45:15 version -- scripts/common.sh@345 -- # : 1 00:08:14.898 09:45:15 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.898 09:45:15 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:14.898 09:45:15 version -- scripts/common.sh@365 -- # decimal 1 00:08:14.898 09:45:15 version -- scripts/common.sh@353 -- # local d=1 00:08:14.898 09:45:15 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.898 09:45:15 version -- scripts/common.sh@355 -- # echo 1 00:08:14.898 09:45:15 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.898 09:45:15 version -- scripts/common.sh@366 -- # decimal 2 00:08:14.898 09:45:15 version -- scripts/common.sh@353 -- # local d=2 00:08:14.898 09:45:15 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.898 09:45:15 version -- scripts/common.sh@355 -- # echo 2 00:08:14.898 09:45:15 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.898 09:45:15 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.898 09:45:15 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.898 09:45:15 version -- scripts/common.sh@368 -- # return 0 00:08:14.898 09:45:15 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.898 09:45:15 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:14.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.898 --rc genhtml_branch_coverage=1 00:08:14.898 --rc genhtml_function_coverage=1 00:08:14.898 --rc genhtml_legend=1 00:08:14.898 --rc geninfo_all_blocks=1 00:08:14.898 --rc geninfo_unexecuted_blocks=1 00:08:14.898 00:08:14.898 ' 00:08:14.898 09:45:15 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:08:14.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.898 --rc genhtml_branch_coverage=1 00:08:14.898 --rc genhtml_function_coverage=1 00:08:14.898 --rc genhtml_legend=1 00:08:14.898 --rc geninfo_all_blocks=1 00:08:14.898 --rc geninfo_unexecuted_blocks=1 00:08:14.898 00:08:14.898 ' 00:08:14.898 09:45:15 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:14.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.898 --rc genhtml_branch_coverage=1 00:08:14.898 --rc genhtml_function_coverage=1 00:08:14.898 --rc genhtml_legend=1 00:08:14.898 --rc geninfo_all_blocks=1 00:08:14.898 --rc geninfo_unexecuted_blocks=1 00:08:14.898 00:08:14.898 ' 00:08:14.898 09:45:15 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:14.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.898 --rc genhtml_branch_coverage=1 00:08:14.898 --rc genhtml_function_coverage=1 00:08:14.898 --rc genhtml_legend=1 00:08:14.898 --rc geninfo_all_blocks=1 00:08:14.898 --rc geninfo_unexecuted_blocks=1 00:08:14.898 00:08:14.898 ' 00:08:14.898 09:45:15 version -- app/version.sh@17 -- # get_header_version major 00:08:14.898 09:45:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:14.898 09:45:15 version -- app/version.sh@14 -- # cut -f2 00:08:14.898 09:45:15 version -- app/version.sh@14 -- # tr -d '"' 00:08:14.898 09:45:15 version -- app/version.sh@17 -- # major=25 00:08:14.898 09:45:15 version -- app/version.sh@18 -- # get_header_version minor 00:08:14.898 09:45:15 version -- app/version.sh@14 -- # cut -f2 00:08:14.898 09:45:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:14.898 09:45:15 version -- app/version.sh@14 -- # tr -d '"' 00:08:14.898 09:45:15 version -- app/version.sh@18 -- # minor=1 00:08:14.898 09:45:15 
version -- app/version.sh@19 -- # get_header_version patch 00:08:14.898 09:45:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:14.898 09:45:15 version -- app/version.sh@14 -- # cut -f2 00:08:14.898 09:45:15 version -- app/version.sh@14 -- # tr -d '"' 00:08:14.898 09:45:15 version -- app/version.sh@19 -- # patch=0 00:08:14.898 09:45:15 version -- app/version.sh@20 -- # get_header_version suffix 00:08:14.898 09:45:15 version -- app/version.sh@14 -- # cut -f2 00:08:14.898 09:45:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:14.898 09:45:15 version -- app/version.sh@14 -- # tr -d '"' 00:08:14.898 09:45:15 version -- app/version.sh@20 -- # suffix=-pre 00:08:14.898 09:45:15 version -- app/version.sh@22 -- # version=25.1 00:08:14.898 09:45:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:14.898 09:45:15 version -- app/version.sh@28 -- # version=25.1rc0 00:08:14.898 09:45:15 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:14.898 09:45:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:14.898 09:45:15 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:14.898 09:45:15 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:14.898 00:08:14.898 real 0m0.327s 00:08:14.898 user 0m0.190s 00:08:14.898 sys 0m0.193s 00:08:14.898 ************************************ 00:08:14.898 END TEST version 00:08:14.898 ************************************ 00:08:14.898 09:45:15 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.898 09:45:15 version -- common/autotest_common.sh@10 -- # set +x 00:08:14.898 
09:45:15 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:14.898 09:45:15 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:08:14.898 09:45:15 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:14.898 09:45:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.898 09:45:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.898 09:45:15 -- common/autotest_common.sh@10 -- # set +x 00:08:14.898 ************************************ 00:08:14.898 START TEST bdev_raid 00:08:14.898 ************************************ 00:08:14.898 09:45:15 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:14.898 * Looking for test storage... 00:08:14.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:14.898 09:45:15 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:14.898 09:45:15 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:08:14.898 09:45:15 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:14.898 09:45:15 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@345 -- # : 1 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:14.898 09:45:15 bdev_raid -- scripts/common.sh@368 -- # return 0 00:08:14.898 09:45:15 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.898 09:45:15 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:14.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.898 --rc genhtml_branch_coverage=1 00:08:14.899 --rc genhtml_function_coverage=1 00:08:14.899 --rc genhtml_legend=1 00:08:14.899 --rc geninfo_all_blocks=1 00:08:14.899 --rc geninfo_unexecuted_blocks=1 00:08:14.899 00:08:14.899 ' 00:08:14.899 09:45:15 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:14.899 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:14.899 --rc genhtml_branch_coverage=1 00:08:14.899 --rc genhtml_function_coverage=1 00:08:14.899 --rc genhtml_legend=1 00:08:14.899 --rc geninfo_all_blocks=1 00:08:14.899 --rc geninfo_unexecuted_blocks=1 00:08:14.899 00:08:14.899 ' 00:08:14.899 09:45:15 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:14.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.899 --rc genhtml_branch_coverage=1 00:08:14.899 --rc genhtml_function_coverage=1 00:08:14.899 --rc genhtml_legend=1 00:08:14.899 --rc geninfo_all_blocks=1 00:08:14.899 --rc geninfo_unexecuted_blocks=1 00:08:14.899 00:08:14.899 ' 00:08:14.899 09:45:15 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:14.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.899 --rc genhtml_branch_coverage=1 00:08:14.899 --rc genhtml_function_coverage=1 00:08:14.899 --rc genhtml_legend=1 00:08:14.899 --rc geninfo_all_blocks=1 00:08:14.899 --rc geninfo_unexecuted_blocks=1 00:08:14.899 00:08:14.899 ' 00:08:14.899 09:45:15 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:14.899 09:45:15 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:08:14.899 09:45:15 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:08:14.899 09:45:15 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:08:14.899 09:45:15 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:08:14.899 09:45:15 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:08:14.899 09:45:15 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:08:14.899 09:45:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.899 09:45:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.899 09:45:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:14.899 ************************************ 
00:08:14.899 START TEST raid1_resize_data_offset_test 00:08:14.899 ************************************ 00:08:14.899 Process raid pid: 60290 00:08:14.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.899 09:45:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:08:14.899 09:45:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60290 00:08:14.899 09:45:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60290' 00:08:14.899 09:45:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60290 00:08:14.899 09:45:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 60290 ']' 00:08:14.899 09:45:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:14.899 09:45:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.899 09:45:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.899 09:45:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.899 09:45:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.899 09:45:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.158 [2024-11-27 09:45:16.080043] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:08:15.158 [2024-11-27 09:45:16.080238] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.158 [2024-11-27 09:45:16.235219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.417 [2024-11-27 09:45:16.371063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.676 [2024-11-27 09:45:16.603466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.676 [2024-11-27 09:45:16.603635] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.935 09:45:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.935 09:45:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:08:15.935 09:45:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:08:15.935 09:45:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.935 09:45:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.935 malloc0 00:08:15.935 09:45:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.935 09:45:16 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:08:15.935 09:45:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.935 09:45:16 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.194 malloc1 00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.194 09:45:17 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.194 null0 00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.194 [2024-11-27 09:45:17.108915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:08:16.194 [2024-11-27 09:45:17.111089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:16.194 [2024-11-27 09:45:17.111186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:08:16.194 [2024-11-27 09:45:17.111370] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:16.194 [2024-11-27 09:45:17.111422] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:08:16.194 [2024-11-27 09:45:17.111800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:16.194 [2024-11-27 09:45:17.112068] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:16.194 [2024-11-27 09:45:17.112135] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:16.194 [2024-11-27 09:45:17.112366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.194 [2024-11-27 09:45:17.168827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.194 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.763 malloc2 00:08:16.763 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.763 09:45:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:08:16.763 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.763 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.763 [2024-11-27 09:45:17.792515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:16.763 [2024-11-27 09:45:17.811741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:16.763 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.763 [2024-11-27 09:45:17.814071] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:08:16.763 09:45:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.763 09:45:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:16.763 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.763 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.763 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.763 09:45:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:08:16.763 09:45:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60290 00:08:16.763 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 60290 ']' 00:08:16.763 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 60290 00:08:16.763 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:08:16.763 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:08:16.763 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60290 00:08:17.022 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:17.023 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:17.023 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60290' 00:08:17.023 killing process with pid 60290 00:08:17.023 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 60290 00:08:17.023 [2024-11-27 09:45:17.906297] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:17.023 09:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 60290 00:08:17.023 [2024-11-27 09:45:17.907842] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:08:17.023 [2024-11-27 09:45:17.907914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.023 [2024-11-27 09:45:17.907934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:08:17.023 [2024-11-27 09:45:17.946235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.023 [2024-11-27 09:45:17.946657] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.023 [2024-11-27 09:45:17.946677] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:18.931 [2024-11-27 09:45:19.915337] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:20.312 ************************************ 00:08:20.312 END TEST raid1_resize_data_offset_test 00:08:20.312 ************************************ 00:08:20.312 09:45:21 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:08:20.312 00:08:20.312 real 0m5.175s 00:08:20.312 user 0m4.875s 00:08:20.312 sys 0m0.733s 00:08:20.312 09:45:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.312 09:45:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.312 09:45:21 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:08:20.312 09:45:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:20.312 09:45:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.312 09:45:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:20.312 ************************************ 00:08:20.312 START TEST raid0_resize_superblock_test 00:08:20.312 ************************************ 00:08:20.312 09:45:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:08:20.312 09:45:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:08:20.312 09:45:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60374 00:08:20.313 09:45:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:20.313 09:45:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60374' 00:08:20.313 Process raid pid: 60374 00:08:20.313 09:45:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60374 00:08:20.313 09:45:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60374 ']' 00:08:20.313 09:45:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.313 09:45:21 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.313 09:45:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.313 09:45:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.313 09:45:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.313 [2024-11-27 09:45:21.331655] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:08:20.313 [2024-11-27 09:45:21.331902] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.572 [2024-11-27 09:45:21.512486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.572 [2024-11-27 09:45:21.648582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.831 [2024-11-27 09:45:21.887030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.831 [2024-11-27 09:45:21.887073] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.091 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.091 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:21.091 09:45:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:08:21.091 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.091 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:21.661 malloc0 00:08:21.661 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.661 09:45:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:21.661 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.661 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.661 [2024-11-27 09:45:22.769571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:21.661 [2024-11-27 09:45:22.769658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.661 [2024-11-27 09:45:22.769684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:21.661 [2024-11-27 09:45:22.769698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.661 [2024-11-27 09:45:22.772320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.661 [2024-11-27 09:45:22.772397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:21.661 pt0 00:08:21.661 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.661 09:45:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:21.661 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.661 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.921 6b5f6de5-dfca-47a0-8be6-b0f70d300954 00:08:21.921 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.921 09:45:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:08:21.921 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.921 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.921 368e0322-23aa-46c0-9332-a65c29d9fd5a 00:08:21.921 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.921 09:45:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:21.921 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.921 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.921 386c8d2d-4da1-4c14-b66e-697e1e3c2d9c 00:08:21.921 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.921 09:45:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:21.921 09:45:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:21.921 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.921 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.921 [2024-11-27 09:45:22.977529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 368e0322-23aa-46c0-9332-a65c29d9fd5a is claimed 00:08:21.921 [2024-11-27 09:45:22.977621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 386c8d2d-4da1-4c14-b66e-697e1e3c2d9c is claimed 00:08:21.921 [2024-11-27 09:45:22.977750] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:21.921 [2024-11-27 09:45:22.977766] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:08:21.921 [2024-11-27 09:45:22.978085] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:21.921 [2024-11-27 09:45:22.978284] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:21.921 [2024-11-27 09:45:22.978307] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:21.921 [2024-11-27 09:45:22.978472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.921 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.921 09:45:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:21.921 09:45:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:21.921 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.921 09:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.921 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.921 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:21.921 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:21.921 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:21.921 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.921 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:22.181 09:45:23 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.181 [2024-11-27 09:45:23.097576] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.181 [2024-11-27 09:45:23.141453] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:22.181 [2024-11-27 09:45:23.141482] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '368e0322-23aa-46c0-9332-a65c29d9fd5a' was resized: old size 131072, new size 204800 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.181 [2024-11-27 09:45:23.153325] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:22.181 [2024-11-27 09:45:23.153349] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '386c8d2d-4da1-4c14-b66e-697e1e3c2d9c' was resized: old size 131072, new size 204800 00:08:22.181 [2024-11-27 09:45:23.153378] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.181 09:45:23 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:08:22.181 [2024-11-27 09:45:23.265239] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.181 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.441 [2024-11-27 09:45:23.312943] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:08:22.441 [2024-11-27 09:45:23.313021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:22.441 [2024-11-27 09:45:23.313037] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.441 [2024-11-27 09:45:23.313052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:22.441 [2024-11-27 09:45:23.313163] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.441 [2024-11-27 09:45:23.313203] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.441 [2024-11-27 09:45:23.313215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.441 [2024-11-27 09:45:23.324873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:22.441 [2024-11-27 09:45:23.324924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.441 [2024-11-27 09:45:23.324944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:22.441 [2024-11-27 09:45:23.324956] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.441 [2024-11-27 09:45:23.327476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.441 [2024-11-27 09:45:23.327565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:08:22.441 [2024-11-27 09:45:23.329400] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 368e0322-23aa-46c0-9332-a65c29d9fd5a 00:08:22.441 [2024-11-27 09:45:23.329487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 368e0322-23aa-46c0-9332-a65c29d9fd5a is claimed 00:08:22.441 [2024-11-27 09:45:23.329595] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 386c8d2d-4da1-4c14-b66e-697e1e3c2d9c 00:08:22.441 [2024-11-27 09:45:23.329614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 386c8d2d-4da1-4c14-b66e-697e1e3c2d9c is claimed 00:08:22.441 [2024-11-27 09:45:23.329765] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 386c8d2d-4da1-4c14-b66e-697e1e3c2d9c (2) smaller than existing raid bdev Raid (3) 00:08:22.441 [2024-11-27 09:45:23.329792] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 368e0322-23aa-46c0-9332-a65c29d9fd5a: File exists 00:08:22.441 [2024-11-27 09:45:23.329828] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:22.441 [2024-11-27 09:45:23.329858] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:08:22.441 [2024-11-27 09:45:23.330145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:22.441 pt0 00:08:22.441 [2024-11-27 09:45:23.330310] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:22.441 [2024-11-27 09:45:23.330320] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:22.441 [2024-11-27 09:45:23.330471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.441 [2024-11-27 09:45:23.353658] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60374 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60374 ']' 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60374 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60374 00:08:22.441 killing process with pid 60374 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60374' 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60374 00:08:22.441 [2024-11-27 09:45:23.424935] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:22.441 [2024-11-27 09:45:23.425034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.441 [2024-11-27 09:45:23.425088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.441 [2024-11-27 09:45:23.425098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:22.441 09:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60374 00:08:24.348 [2024-11-27 09:45:24.996674] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:25.286 09:45:26 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:25.286 00:08:25.286 real 0m5.013s 00:08:25.286 user 0m5.054s 00:08:25.286 sys 0m0.743s 00:08:25.286 09:45:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.286 ************************************ 00:08:25.286 END TEST raid0_resize_superblock_test 00:08:25.286 
************************************ 00:08:25.286 09:45:26 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.286 09:45:26 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:08:25.286 09:45:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:25.286 09:45:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.286 09:45:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:25.286 ************************************ 00:08:25.286 START TEST raid1_resize_superblock_test 00:08:25.286 ************************************ 00:08:25.286 09:45:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:08:25.286 09:45:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:08:25.286 09:45:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60478 00:08:25.286 Process raid pid: 60478 00:08:25.286 09:45:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:25.286 09:45:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60478' 00:08:25.286 09:45:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60478 00:08:25.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:25.286 09:45:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60478 ']' 00:08:25.286 09:45:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.286 09:45:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.286 09:45:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.286 09:45:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.286 09:45:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.286 [2024-11-27 09:45:26.414002] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:08:25.286 [2024-11-27 09:45:26.414207] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.545 [2024-11-27 09:45:26.589186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.803 [2024-11-27 09:45:26.732617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.063 [2024-11-27 09:45:26.965375] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.063 [2024-11-27 09:45:26.965443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.322 09:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.322 09:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:26.322 09:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:08:26.322 09:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.322 09:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.890 malloc0 00:08:26.890 09:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.890 09:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:26.890 09:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.890 09:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.890 [2024-11-27 09:45:27.879986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:26.890 [2024-11-27 09:45:27.880086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.890 [2024-11-27 09:45:27.880112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:26.890 [2024-11-27 09:45:27.880127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.890 [2024-11-27 09:45:27.882687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.890 [2024-11-27 09:45:27.882766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:26.890 pt0 00:08:26.890 09:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.890 09:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:26.890 09:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.890 09:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.149 3c05416e-32ef-4a0c-a609-661de22b14bc 00:08:27.149 09:45:28 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.149 596d81c0-09f0-4125-adca-7c2c4ab54277 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.149 a68dc041-c078-4b69-aaf3-70f31eba0ab7 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.149 [2024-11-27 09:45:28.089261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 596d81c0-09f0-4125-adca-7c2c4ab54277 is claimed 00:08:27.149 [2024-11-27 09:45:28.089421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a68dc041-c078-4b69-aaf3-70f31eba0ab7 is claimed 00:08:27.149 [2024-11-27 09:45:28.089563] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:27.149 [2024-11-27 09:45:28.089581] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:08:27.149 [2024-11-27 09:45:28.089880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:27.149 [2024-11-27 09:45:28.090116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:27.149 [2024-11-27 09:45:28.090129] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:27.149 [2024-11-27 09:45:28.090292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.149 09:45:28 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.149 [2024-11-27 09:45:28.201288] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.149 [2024-11-27 09:45:28.233244] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:27.149 [2024-11-27 09:45:28.233316] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '596d81c0-09f0-4125-adca-7c2c4ab54277' was resized: old size 131072, new size 204800 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.149 [2024-11-27 09:45:28.245165] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:27.149 [2024-11-27 09:45:28.245190] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'a68dc041-c078-4b69-aaf3-70f31eba0ab7' was resized: old size 131072, new size 204800 00:08:27.149 [2024-11-27 09:45:28.245232] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.149 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:27.408 09:45:28 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.408 [2024-11-27 09:45:28.361069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.408 [2024-11-27 09:45:28.404787] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:08:27.408 [2024-11-27 09:45:28.404922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:27.408 [2024-11-27 09:45:28.404989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:27.408 [2024-11-27 09:45:28.405209] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.408 [2024-11-27 09:45:28.405504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.408 [2024-11-27 09:45:28.405620] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.408 [2024-11-27 09:45:28.405678] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.408 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.408 [2024-11-27 09:45:28.416642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:27.408 [2024-11-27 09:45:28.416696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.408 [2024-11-27 09:45:28.416716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:27.409 [2024-11-27 09:45:28.416731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.409 
[2024-11-27 09:45:28.419298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.409 [2024-11-27 09:45:28.419337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:27.409 [2024-11-27 09:45:28.421068] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 596d81c0-09f0-4125-adca-7c2c4ab54277 00:08:27.409 [2024-11-27 09:45:28.421147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 596d81c0-09f0-4125-adca-7c2c4ab54277 is claimed 00:08:27.409 [2024-11-27 09:45:28.421264] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev a68dc041-c078-4b69-aaf3-70f31eba0ab7 00:08:27.409 [2024-11-27 09:45:28.421282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev a68dc041-c078-4b69-aaf3-70f31eba0ab7 is claimed 00:08:27.409 [2024-11-27 09:45:28.421445] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev a68dc041-c078-4b69-aaf3-70f31eba0ab7 (2) smaller than existing raid bdev Raid (3) 00:08:27.409 [2024-11-27 09:45:28.421469] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 596d81c0-09f0-4125-adca-7c2c4ab54277: File exists 00:08:27.409 [2024-11-27 09:45:28.421517] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:27.409 [2024-11-27 09:45:28.421547] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:27.409 [2024-11-27 09:45:28.421810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:27.409 pt0 00:08:27.409 [2024-11-27 09:45:28.421983] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:27.409 [2024-11-27 09:45:28.421993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:27.409 [2024-11-27 09:45:28.422161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:08:27.409 [2024-11-27 09:45:28.445330] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60478 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test 
-- common/autotest_common.sh@954 -- # '[' -z 60478 ']' 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60478 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60478 00:08:27.409 killing process with pid 60478 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60478' 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60478 00:08:27.409 [2024-11-27 09:45:28.527647] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:27.409 [2024-11-27 09:45:28.527763] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.409 09:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60478 00:08:27.409 [2024-11-27 09:45:28.527834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.409 [2024-11-27 09:45:28.527845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:29.310 [2024-11-27 09:45:30.063678] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.275 ************************************ 00:08:30.275 END TEST raid1_resize_superblock_test 00:08:30.275 ************************************ 00:08:30.275 09:45:31 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:30.275 00:08:30.275 real 0m4.977s 00:08:30.275 user 0m5.035s 00:08:30.275 sys 0m0.733s 00:08:30.275 09:45:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.275 09:45:31 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.275 09:45:31 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:08:30.275 09:45:31 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:08:30.275 09:45:31 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:08:30.275 09:45:31 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:08:30.275 09:45:31 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:08:30.275 09:45:31 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:08:30.275 09:45:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:30.275 09:45:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.275 09:45:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:30.275 ************************************ 00:08:30.275 START TEST raid_function_test_raid0 00:08:30.275 ************************************ 00:08:30.275 09:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:08:30.275 09:45:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:08:30.275 09:45:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:30.275 09:45:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:30.275 09:45:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60586 00:08:30.275 09:45:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:08:30.275 09:45:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60586' 00:08:30.275 Process raid pid: 60586 00:08:30.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.275 09:45:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60586 00:08:30.275 09:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60586 ']' 00:08:30.275 09:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.275 09:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.275 09:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.275 09:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.275 09:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:30.535 [2024-11-27 09:45:31.477655] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:08:30.535 [2024-11-27 09:45:31.477877] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.535 [2024-11-27 09:45:31.659940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.795 [2024-11-27 09:45:31.797245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.055 [2024-11-27 09:45:32.036989] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.055 [2024-11-27 09:45:32.037097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.314 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.314 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:08:31.314 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:31.314 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.314 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:31.314 Base_1 00:08:31.314 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.314 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:31.314 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.314 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:31.314 Base_2 00:08:31.314 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.314 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:08:31.314 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.314 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:31.314 [2024-11-27 09:45:32.425351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:31.314 [2024-11-27 09:45:32.427524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:31.314 [2024-11-27 09:45:32.427657] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:31.314 [2024-11-27 09:45:32.427700] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:31.314 [2024-11-27 09:45:32.427992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:31.314 [2024-11-27 09:45:32.428181] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:31.314 [2024-11-27 09:45:32.428192] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:31.314 [2024-11-27 09:45:32.428358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.315 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.315 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:31.315 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.315 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:31.315 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:31.315 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.574 09:45:32 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:31.574 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:31.574 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:31.574 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:31.574 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:31.575 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:31.575 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:31.575 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:31.575 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:08:31.575 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:31.575 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:31.575 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:31.575 [2024-11-27 09:45:32.669039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:31.575 /dev/nbd0 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:31.835 
09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:31.835 1+0 records in 00:08:31.835 1+0 records out 00:08:31.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472365 s, 8.7 MB/s 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:31.835 { 00:08:31.835 "nbd_device": "/dev/nbd0", 00:08:31.835 "bdev_name": "raid" 00:08:31.835 } 00:08:31.835 ]' 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:31.835 09:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:31.835 { 00:08:31.835 "nbd_device": "/dev/nbd0", 00:08:31.835 "bdev_name": "raid" 00:08:31.835 } 00:08:31.835 ]' 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 
00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:32.096 4096+0 records in 00:08:32.096 4096+0 records out 00:08:32.096 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0296326 s, 70.8 MB/s 00:08:32.096 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:32.357 4096+0 records in 00:08:32.357 4096+0 records out 00:08:32.357 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.205417 s, 10.2 MB/s 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:32.357 128+0 records in 00:08:32.357 128+0 records out 00:08:32.357 65536 bytes (66 kB, 64 KiB) copied, 0.00031471 s, 208 MB/s 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:32.357 2035+0 records in 00:08:32.357 2035+0 records out 00:08:32.357 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00850868 s, 122 MB/s 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:32.357 09:45:33 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:32.357 456+0 records in 00:08:32.357 456+0 records out 00:08:32.357 233472 bytes (233 kB, 228 KiB) copied, 0.00395283 s, 59.1 MB/s 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:32.357 09:45:33 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.357 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:32.618 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:32.618 [2024-11-27 09:45:33.618593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.618 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:32.618 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:32.618 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.618 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.618 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:32.618 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:08:32.618 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.618 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:32.618 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:32.618 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60586 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60586 ']' 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60586 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60586 00:08:32.878 killing process with pid 60586 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60586' 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60586 
00:08:32.878 [2024-11-27 09:45:33.948693] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:32.878 [2024-11-27 09:45:33.948812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.878 09:45:33 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60586 00:08:32.878 [2024-11-27 09:45:33.948869] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:32.879 [2024-11-27 09:45:33.948886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:33.138 [2024-11-27 09:45:34.177919] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:34.519 09:45:35 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:08:34.519 00:08:34.519 real 0m3.995s 00:08:34.519 user 0m4.483s 00:08:34.519 sys 0m1.088s 00:08:34.519 09:45:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.519 09:45:35 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:34.519 ************************************ 00:08:34.519 END TEST raid_function_test_raid0 00:08:34.519 ************************************ 00:08:34.519 09:45:35 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:08:34.519 09:45:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:34.519 09:45:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.519 09:45:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:34.519 ************************************ 00:08:34.519 START TEST raid_function_test_concat 00:08:34.519 ************************************ 00:08:34.519 09:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:08:34.519 09:45:35 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:08:34.519 09:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:34.519 09:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:34.519 09:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60715 00:08:34.519 09:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:34.519 09:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60715' 00:08:34.519 Process raid pid: 60715 00:08:34.519 09:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60715 00:08:34.519 09:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60715 ']' 00:08:34.519 09:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.519 09:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.519 09:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.519 09:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.519 09:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:34.519 [2024-11-27 09:45:35.548638] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:08:34.519 [2024-11-27 09:45:35.548797] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.779 [2024-11-27 09:45:35.722964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.779 [2024-11-27 09:45:35.860776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.037 [2024-11-27 09:45:36.089662] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.037 [2024-11-27 09:45:36.089711] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.296 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.296 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:08:35.296 09:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:35.296 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.296 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:35.296 Base_1 00:08:35.296 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.296 09:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:35.296 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.296 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:35.556 Base_2 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:35.556 [2024-11-27 09:45:36.476462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:35.556 [2024-11-27 09:45:36.478620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:35.556 [2024-11-27 09:45:36.478719] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:35.556 [2024-11-27 09:45:36.478732] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:35.556 [2024-11-27 09:45:36.479013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:35.556 [2024-11-27 09:45:36.479173] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:35.556 [2024-11-27 09:45:36.479194] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:35.556 [2024-11-27 09:45:36.479368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.556 09:45:36 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:35.556 09:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:35.815 [2024-11-27 09:45:36.704175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:35.815 /dev/nbd0 00:08:35.815 09:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:35.815 09:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:35.815 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:35.815 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:08:35.815 09:45:36 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:35.816 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:35.816 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:35.816 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:08:35.816 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:35.816 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:35.816 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:35.816 1+0 records in 00:08:35.816 1+0 records out 00:08:35.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377722 s, 10.8 MB/s 00:08:35.816 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.816 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:08:35.816 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:35.816 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:35.816 09:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:08:35.816 09:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:35.816 09:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:35.816 09:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:35.816 09:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk.sock 00:08:35.816 09:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:36.075 09:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:36.075 { 00:08:36.075 "nbd_device": "/dev/nbd0", 00:08:36.075 "bdev_name": "raid" 00:08:36.075 } 00:08:36.075 ]' 00:08:36.075 09:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:36.075 09:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:36.075 { 00:08:36.075 "nbd_device": "/dev/nbd0", 00:08:36.075 "bdev_name": "raid" 00:08:36.075 } 00:08:36.075 ]' 00:08:36.075 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:36.075 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:36.075 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:36.075 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:08:36.075 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:08:36.075 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:08:36.075 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:36.075 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:36.075 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:36.075 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:36.075 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:36.075 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC 
/dev/nbd0 00:08:36.075 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:36.075 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:36.075 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:36.076 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:36.076 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:36.076 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:36.076 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:36.076 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:36.076 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:36.076 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:36.076 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:36.076 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:36.076 4096+0 records in 00:08:36.076 4096+0 records out 00:08:36.076 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.031977 s, 65.6 MB/s 00:08:36.076 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:36.335 4096+0 records in 00:08:36.335 4096+0 records out 00:08:36.335 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.187122 s, 11.2 MB/s 00:08:36.335 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:36.336 128+0 records in 00:08:36.336 128+0 records out 00:08:36.336 65536 bytes (66 kB, 64 KiB) copied, 0.00117536 s, 55.8 MB/s 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:36.336 2035+0 records in 00:08:36.336 2035+0 records out 00:08:36.336 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0150833 s, 69.1 MB/s 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:36.336 09:45:37 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:36.336 456+0 records in 00:08:36.336 456+0 records out 00:08:36.336 233472 bytes (233 kB, 228 KiB) copied, 0.00356324 s, 65.5 MB/s 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:36.336 
09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.336 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:36.595 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:36.595 [2024-11-27 09:45:37.589488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.595 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:36.595 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:36.595 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.595 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.595 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:36.595 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:08:36.595 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.595 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:36.595 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:36.595 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:36.855 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:36.855 09:45:37 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:36.855 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:36.855 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:36.855 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:36.855 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:36.855 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:08:36.856 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:08:36.856 09:45:37 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:36.856 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:08:36.856 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:36.856 09:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60715 00:08:36.856 09:45:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60715 ']' 00:08:36.856 09:45:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60715 00:08:36.856 09:45:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:08:36.856 09:45:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.856 09:45:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60715 00:08:36.856 09:45:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.856 09:45:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.856 killing process with pid 60715 00:08:36.856 09:45:37 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60715' 00:08:36.856 09:45:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60715 00:08:36.856 [2024-11-27 09:45:37.893211] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:36.856 [2024-11-27 09:45:37.893335] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.856 09:45:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60715 00:08:36.856 [2024-11-27 09:45:37.893401] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.856 [2024-11-27 09:45:37.893415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:37.115 [2024-11-27 09:45:38.114067] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:38.496 09:45:39 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:08:38.496 00:08:38.496 real 0m3.852s 00:08:38.496 user 0m4.318s 00:08:38.496 sys 0m1.031s 00:08:38.496 09:45:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.496 09:45:39 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:38.496 ************************************ 00:08:38.496 END TEST raid_function_test_concat 00:08:38.496 ************************************ 00:08:38.496 09:45:39 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:08:38.496 09:45:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:38.496 09:45:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.496 09:45:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:38.496 ************************************ 00:08:38.496 START TEST raid0_resize_test 00:08:38.496 ************************************ 00:08:38.496 09:45:39 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:08:38.496 09:45:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:08:38.496 09:45:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:38.496 09:45:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:38.496 09:45:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:38.496 09:45:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:38.496 09:45:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:38.496 09:45:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:38.496 09:45:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:38.496 09:45:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60839 00:08:38.496 09:45:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:38.496 09:45:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60839' 00:08:38.496 Process raid pid: 60839 00:08:38.496 09:45:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60839 00:08:38.496 09:45:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60839 ']' 00:08:38.496 09:45:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.496 09:45:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:38.496 09:45:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.496 09:45:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.496 09:45:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.496 [2024-11-27 09:45:39.477677] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:08:38.496 [2024-11-27 09:45:39.477799] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.756 [2024-11-27 09:45:39.645988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.756 [2024-11-27 09:45:39.778551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.015 [2024-11-27 09:45:40.011418] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.015 [2024-11-27 09:45:40.011466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.304 Base_1 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:39.304 
09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.304 Base_2 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.304 [2024-11-27 09:45:40.327072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:39.304 [2024-11-27 09:45:40.329153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:39.304 [2024-11-27 09:45:40.329211] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:39.304 [2024-11-27 09:45:40.329222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:39.304 [2024-11-27 09:45:40.329498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:39.304 [2024-11-27 09:45:40.329628] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:39.304 [2024-11-27 09:45:40.329642] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:39.304 [2024-11-27 09:45:40.329781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:39.304 
09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.304 [2024-11-27 09:45:40.339006] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:39.304 [2024-11-27 09:45:40.339050] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:39.304 true 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.304 [2024-11-27 09:45:40.355167] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 
-- # set +x 00:08:39.304 [2024-11-27 09:45:40.398881] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:39.304 [2024-11-27 09:45:40.398906] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:39.304 [2024-11-27 09:45:40.398937] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:08:39.304 true 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.304 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.601 [2024-11-27 09:45:40.415038] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.601 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.601 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:08:39.601 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:08:39.601 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:08:39.602 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:08:39.602 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:08:39.602 09:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60839 00:08:39.602 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60839 ']' 00:08:39.602 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60839 
00:08:39.602 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:08:39.602 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.602 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60839 00:08:39.602 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.602 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.602 killing process with pid 60839 00:08:39.602 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60839' 00:08:39.602 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60839 00:08:39.602 [2024-11-27 09:45:40.499070] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:39.602 [2024-11-27 09:45:40.499157] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.602 [2024-11-27 09:45:40.499216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:39.602 [2024-11-27 09:45:40.499226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:39.602 09:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60839 00:08:39.602 [2024-11-27 09:45:40.517886] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:40.985 09:45:41 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:40.985 00:08:40.985 real 0m2.341s 00:08:40.985 user 0m2.375s 00:08:40.985 sys 0m0.447s 00:08:40.985 09:45:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.985 09:45:41 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.985 ************************************ 00:08:40.985 END TEST 
raid0_resize_test 00:08:40.985 ************************************ 00:08:40.986 09:45:41 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:08:40.986 09:45:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:40.986 09:45:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.986 09:45:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:40.986 ************************************ 00:08:40.986 START TEST raid1_resize_test 00:08:40.986 ************************************ 00:08:40.986 09:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:08:40.986 09:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:08:40.986 09:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:40.986 09:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:40.986 09:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:40.986 09:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:40.986 09:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:40.986 09:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:40.986 09:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:40.986 09:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60900 00:08:40.986 09:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:40.986 Process raid pid: 60900 00:08:40.986 09:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60900' 00:08:40.986 09:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60900 00:08:40.986 09:45:41 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60900 ']' 00:08:40.986 09:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.986 09:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.986 09:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.986 09:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.986 09:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.986 [2024-11-27 09:45:41.886239] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:08:40.986 [2024-11-27 09:45:41.886392] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.986 [2024-11-27 09:45:42.067701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.246 [2024-11-27 09:45:42.199109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.505 [2024-11-27 09:45:42.432261] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.505 [2024-11-27 09:45:42.432307] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.765 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:41.765 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:08:41.765 09:45:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:41.765 09:45:42 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.765 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.765 Base_1 00:08:41.765 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.765 09:45:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:41.765 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.765 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.765 Base_2 00:08:41.765 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.765 09:45:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:08:41.765 09:45:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:41.765 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.765 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.765 [2024-11-27 09:45:42.745136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:41.765 [2024-11-27 09:45:42.747214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:41.765 [2024-11-27 09:45:42.747275] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:41.765 [2024-11-27 09:45:42.747287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:41.765 [2024-11-27 09:45:42.747540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:41.765 [2024-11-27 09:45:42.747677] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:41.765 [2024-11-27 09:45:42.747687] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:41.766 [2024-11-27 09:45:42.747817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.766 [2024-11-27 09:45:42.757087] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:41.766 [2024-11-27 09:45:42.757118] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:41.766 true 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.766 [2024-11-27 09:45:42.773223] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:08:41.766 09:45:42 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.766 [2024-11-27 09:45:42.816953] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:41.766 [2024-11-27 09:45:42.816977] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:41.766 [2024-11-27 09:45:42.817013] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:08:41.766 true 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.766 [2024-11-27 09:45:42.829103] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:08:41.766 09:45:42 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60900 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60900 ']' 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60900 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.766 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60900 00:08:42.026 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:42.026 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:42.026 killing process with pid 60900 00:08:42.026 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60900' 00:08:42.026 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60900 00:08:42.026 [2024-11-27 09:45:42.920882] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:42.026 [2024-11-27 09:45:42.920986] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.026 09:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60900 00:08:42.026 [2024-11-27 09:45:42.921601] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.026 [2024-11-27 09:45:42.921630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:42.026 [2024-11-27 09:45:42.938934] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:08:43.406 09:45:44 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:43.406 00:08:43.406 real 0m2.323s 00:08:43.406 user 0m2.394s 00:08:43.406 sys 0m0.418s 00:08:43.406 09:45:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.406 09:45:44 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.406 ************************************ 00:08:43.406 END TEST raid1_resize_test 00:08:43.406 ************************************ 00:08:43.406 09:45:44 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:43.406 09:45:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:43.406 09:45:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:08:43.406 09:45:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:43.406 09:45:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.406 09:45:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:43.406 ************************************ 00:08:43.406 START TEST raid_state_function_test 00:08:43.406 ************************************ 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60963 
00:08:43.406 Process raid pid: 60963 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60963' 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60963 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60963 ']' 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.406 09:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.406 [2024-11-27 09:45:44.293060] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:08:43.406 [2024-11-27 09:45:44.293215] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.406 [2024-11-27 09:45:44.474344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.665 [2024-11-27 09:45:44.603755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.924 [2024-11-27 09:45:44.844304] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.924 [2024-11-27 09:45:44.844344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.184 [2024-11-27 09:45:45.141323] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:44.184 [2024-11-27 09:45:45.141392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:44.184 [2024-11-27 09:45:45.141404] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:44.184 [2024-11-27 09:45:45.141414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.184 09:45:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.184 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.184 "name": "Existed_Raid", 00:08:44.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.184 "strip_size_kb": 64, 00:08:44.184 "state": "configuring", 00:08:44.184 
"raid_level": "raid0", 00:08:44.185 "superblock": false, 00:08:44.185 "num_base_bdevs": 2, 00:08:44.185 "num_base_bdevs_discovered": 0, 00:08:44.185 "num_base_bdevs_operational": 2, 00:08:44.185 "base_bdevs_list": [ 00:08:44.185 { 00:08:44.185 "name": "BaseBdev1", 00:08:44.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.185 "is_configured": false, 00:08:44.185 "data_offset": 0, 00:08:44.185 "data_size": 0 00:08:44.185 }, 00:08:44.185 { 00:08:44.185 "name": "BaseBdev2", 00:08:44.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.185 "is_configured": false, 00:08:44.185 "data_offset": 0, 00:08:44.185 "data_size": 0 00:08:44.185 } 00:08:44.185 ] 00:08:44.185 }' 00:08:44.185 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.185 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.444 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:44.444 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.444 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.704 [2024-11-27 09:45:45.576511] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:44.704 [2024-11-27 09:45:45.576555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:44.704 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.704 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:44.704 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.704 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:44.704 [2024-11-27 09:45:45.588466] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:44.704 [2024-11-27 09:45:45.588512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:44.705 [2024-11-27 09:45:45.588522] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:44.705 [2024-11-27 09:45:45.588535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.705 [2024-11-27 09:45:45.641912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:44.705 BaseBdev1 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.705 [ 00:08:44.705 { 00:08:44.705 "name": "BaseBdev1", 00:08:44.705 "aliases": [ 00:08:44.705 "670c478d-ccbe-422d-9bcd-1c0e59f89db8" 00:08:44.705 ], 00:08:44.705 "product_name": "Malloc disk", 00:08:44.705 "block_size": 512, 00:08:44.705 "num_blocks": 65536, 00:08:44.705 "uuid": "670c478d-ccbe-422d-9bcd-1c0e59f89db8", 00:08:44.705 "assigned_rate_limits": { 00:08:44.705 "rw_ios_per_sec": 0, 00:08:44.705 "rw_mbytes_per_sec": 0, 00:08:44.705 "r_mbytes_per_sec": 0, 00:08:44.705 "w_mbytes_per_sec": 0 00:08:44.705 }, 00:08:44.705 "claimed": true, 00:08:44.705 "claim_type": "exclusive_write", 00:08:44.705 "zoned": false, 00:08:44.705 "supported_io_types": { 00:08:44.705 "read": true, 00:08:44.705 "write": true, 00:08:44.705 "unmap": true, 00:08:44.705 "flush": true, 00:08:44.705 "reset": true, 00:08:44.705 "nvme_admin": false, 00:08:44.705 "nvme_io": false, 00:08:44.705 "nvme_io_md": false, 00:08:44.705 "write_zeroes": true, 00:08:44.705 "zcopy": true, 00:08:44.705 "get_zone_info": false, 00:08:44.705 "zone_management": false, 00:08:44.705 "zone_append": false, 00:08:44.705 "compare": false, 00:08:44.705 "compare_and_write": false, 00:08:44.705 "abort": true, 00:08:44.705 "seek_hole": false, 00:08:44.705 "seek_data": false, 00:08:44.705 "copy": true, 00:08:44.705 "nvme_iov_md": 
false 00:08:44.705 }, 00:08:44.705 "memory_domains": [ 00:08:44.705 { 00:08:44.705 "dma_device_id": "system", 00:08:44.705 "dma_device_type": 1 00:08:44.705 }, 00:08:44.705 { 00:08:44.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.705 "dma_device_type": 2 00:08:44.705 } 00:08:44.705 ], 00:08:44.705 "driver_specific": {} 00:08:44.705 } 00:08:44.705 ] 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.705 
09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.705 "name": "Existed_Raid", 00:08:44.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.705 "strip_size_kb": 64, 00:08:44.705 "state": "configuring", 00:08:44.705 "raid_level": "raid0", 00:08:44.705 "superblock": false, 00:08:44.705 "num_base_bdevs": 2, 00:08:44.705 "num_base_bdevs_discovered": 1, 00:08:44.705 "num_base_bdevs_operational": 2, 00:08:44.705 "base_bdevs_list": [ 00:08:44.705 { 00:08:44.705 "name": "BaseBdev1", 00:08:44.705 "uuid": "670c478d-ccbe-422d-9bcd-1c0e59f89db8", 00:08:44.705 "is_configured": true, 00:08:44.705 "data_offset": 0, 00:08:44.705 "data_size": 65536 00:08:44.705 }, 00:08:44.705 { 00:08:44.705 "name": "BaseBdev2", 00:08:44.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.705 "is_configured": false, 00:08:44.705 "data_offset": 0, 00:08:44.705 "data_size": 0 00:08:44.705 } 00:08:44.705 ] 00:08:44.705 }' 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.705 09:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.964 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:44.964 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.964 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.224 [2024-11-27 09:45:46.097191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:45.224 [2024-11-27 09:45:46.097257] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.224 [2024-11-27 09:45:46.109207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:45.224 [2024-11-27 09:45:46.111410] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:45.224 [2024-11-27 09:45:46.111454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.224 "name": "Existed_Raid", 00:08:45.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.224 "strip_size_kb": 64, 00:08:45.224 "state": "configuring", 00:08:45.224 "raid_level": "raid0", 00:08:45.224 "superblock": false, 00:08:45.224 "num_base_bdevs": 2, 00:08:45.224 "num_base_bdevs_discovered": 1, 00:08:45.224 "num_base_bdevs_operational": 2, 00:08:45.224 "base_bdevs_list": [ 00:08:45.224 { 00:08:45.224 "name": "BaseBdev1", 00:08:45.224 "uuid": "670c478d-ccbe-422d-9bcd-1c0e59f89db8", 00:08:45.224 "is_configured": true, 00:08:45.224 "data_offset": 0, 00:08:45.224 "data_size": 65536 00:08:45.224 }, 00:08:45.224 { 00:08:45.224 "name": "BaseBdev2", 00:08:45.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.224 "is_configured": false, 00:08:45.224 "data_offset": 0, 00:08:45.224 "data_size": 0 00:08:45.224 } 00:08:45.224 
] 00:08:45.224 }' 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.224 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.484 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:45.484 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.484 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.484 [2024-11-27 09:45:46.580581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:45.484 [2024-11-27 09:45:46.580639] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:45.484 [2024-11-27 09:45:46.580649] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:45.484 [2024-11-27 09:45:46.580930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:45.484 [2024-11-27 09:45:46.581176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:45.484 [2024-11-27 09:45:46.581197] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:45.484 [2024-11-27 09:45:46.581483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.484 BaseBdev2 00:08:45.484 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.484 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:45.484 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:45.484 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.484 09:45:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:45.484 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.484 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:45.485 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:45.485 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.485 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.485 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.485 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:45.485 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.485 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.485 [ 00:08:45.485 { 00:08:45.485 "name": "BaseBdev2", 00:08:45.485 "aliases": [ 00:08:45.485 "38288613-05f3-4986-85b9-5c36c168969d" 00:08:45.485 ], 00:08:45.485 "product_name": "Malloc disk", 00:08:45.485 "block_size": 512, 00:08:45.485 "num_blocks": 65536, 00:08:45.485 "uuid": "38288613-05f3-4986-85b9-5c36c168969d", 00:08:45.485 "assigned_rate_limits": { 00:08:45.485 "rw_ios_per_sec": 0, 00:08:45.485 "rw_mbytes_per_sec": 0, 00:08:45.485 "r_mbytes_per_sec": 0, 00:08:45.485 "w_mbytes_per_sec": 0 00:08:45.485 }, 00:08:45.485 "claimed": true, 00:08:45.485 "claim_type": "exclusive_write", 00:08:45.485 "zoned": false, 00:08:45.485 "supported_io_types": { 00:08:45.485 "read": true, 00:08:45.485 "write": true, 00:08:45.485 "unmap": true, 00:08:45.485 "flush": true, 00:08:45.485 "reset": true, 00:08:45.485 "nvme_admin": false, 00:08:45.485 "nvme_io": false, 00:08:45.485 "nvme_io_md": 
false, 00:08:45.485 "write_zeroes": true, 00:08:45.485 "zcopy": true, 00:08:45.485 "get_zone_info": false, 00:08:45.485 "zone_management": false, 00:08:45.485 "zone_append": false, 00:08:45.485 "compare": false, 00:08:45.485 "compare_and_write": false, 00:08:45.485 "abort": true, 00:08:45.485 "seek_hole": false, 00:08:45.485 "seek_data": false, 00:08:45.485 "copy": true, 00:08:45.485 "nvme_iov_md": false 00:08:45.485 }, 00:08:45.485 "memory_domains": [ 00:08:45.485 { 00:08:45.485 "dma_device_id": "system", 00:08:45.485 "dma_device_type": 1 00:08:45.485 }, 00:08:45.485 { 00:08:45.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.745 "dma_device_type": 2 00:08:45.745 } 00:08:45.745 ], 00:08:45.745 "driver_specific": {} 00:08:45.745 } 00:08:45.745 ] 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.745 "name": "Existed_Raid", 00:08:45.745 "uuid": "653b43c8-410d-41c2-9a0d-361bfb0532f5", 00:08:45.745 "strip_size_kb": 64, 00:08:45.745 "state": "online", 00:08:45.745 "raid_level": "raid0", 00:08:45.745 "superblock": false, 00:08:45.745 "num_base_bdevs": 2, 00:08:45.745 "num_base_bdevs_discovered": 2, 00:08:45.745 "num_base_bdevs_operational": 2, 00:08:45.745 "base_bdevs_list": [ 00:08:45.745 { 00:08:45.745 "name": "BaseBdev1", 00:08:45.745 "uuid": "670c478d-ccbe-422d-9bcd-1c0e59f89db8", 00:08:45.745 "is_configured": true, 00:08:45.745 "data_offset": 0, 00:08:45.745 "data_size": 65536 00:08:45.745 }, 00:08:45.745 { 00:08:45.745 "name": "BaseBdev2", 00:08:45.745 "uuid": "38288613-05f3-4986-85b9-5c36c168969d", 00:08:45.745 "is_configured": true, 00:08:45.745 "data_offset": 0, 00:08:45.745 "data_size": 65536 00:08:45.745 } 00:08:45.745 ] 00:08:45.745 }' 00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:45.745 09:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.005 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:46.005 09:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:46.005 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:46.005 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:46.005 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:46.005 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:46.005 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:46.005 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.005 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.005 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:46.005 [2024-11-27 09:45:47.008206] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.005 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.005 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:46.005 "name": "Existed_Raid", 00:08:46.005 "aliases": [ 00:08:46.005 "653b43c8-410d-41c2-9a0d-361bfb0532f5" 00:08:46.005 ], 00:08:46.005 "product_name": "Raid Volume", 00:08:46.005 "block_size": 512, 00:08:46.005 "num_blocks": 131072, 00:08:46.005 "uuid": "653b43c8-410d-41c2-9a0d-361bfb0532f5", 00:08:46.005 "assigned_rate_limits": { 00:08:46.005 "rw_ios_per_sec": 0, 00:08:46.005 "rw_mbytes_per_sec": 0, 00:08:46.005 "r_mbytes_per_sec": 
0, 00:08:46.005 "w_mbytes_per_sec": 0 00:08:46.005 }, 00:08:46.005 "claimed": false, 00:08:46.005 "zoned": false, 00:08:46.005 "supported_io_types": { 00:08:46.005 "read": true, 00:08:46.005 "write": true, 00:08:46.005 "unmap": true, 00:08:46.005 "flush": true, 00:08:46.005 "reset": true, 00:08:46.005 "nvme_admin": false, 00:08:46.005 "nvme_io": false, 00:08:46.005 "nvme_io_md": false, 00:08:46.005 "write_zeroes": true, 00:08:46.005 "zcopy": false, 00:08:46.005 "get_zone_info": false, 00:08:46.005 "zone_management": false, 00:08:46.005 "zone_append": false, 00:08:46.005 "compare": false, 00:08:46.005 "compare_and_write": false, 00:08:46.005 "abort": false, 00:08:46.005 "seek_hole": false, 00:08:46.005 "seek_data": false, 00:08:46.005 "copy": false, 00:08:46.005 "nvme_iov_md": false 00:08:46.005 }, 00:08:46.005 "memory_domains": [ 00:08:46.005 { 00:08:46.005 "dma_device_id": "system", 00:08:46.005 "dma_device_type": 1 00:08:46.005 }, 00:08:46.005 { 00:08:46.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.005 "dma_device_type": 2 00:08:46.005 }, 00:08:46.005 { 00:08:46.005 "dma_device_id": "system", 00:08:46.005 "dma_device_type": 1 00:08:46.005 }, 00:08:46.005 { 00:08:46.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.005 "dma_device_type": 2 00:08:46.005 } 00:08:46.005 ], 00:08:46.005 "driver_specific": { 00:08:46.005 "raid": { 00:08:46.005 "uuid": "653b43c8-410d-41c2-9a0d-361bfb0532f5", 00:08:46.005 "strip_size_kb": 64, 00:08:46.005 "state": "online", 00:08:46.005 "raid_level": "raid0", 00:08:46.005 "superblock": false, 00:08:46.005 "num_base_bdevs": 2, 00:08:46.005 "num_base_bdevs_discovered": 2, 00:08:46.005 "num_base_bdevs_operational": 2, 00:08:46.005 "base_bdevs_list": [ 00:08:46.005 { 00:08:46.005 "name": "BaseBdev1", 00:08:46.005 "uuid": "670c478d-ccbe-422d-9bcd-1c0e59f89db8", 00:08:46.005 "is_configured": true, 00:08:46.005 "data_offset": 0, 00:08:46.005 "data_size": 65536 00:08:46.005 }, 00:08:46.005 { 00:08:46.005 "name": "BaseBdev2", 
00:08:46.005 "uuid": "38288613-05f3-4986-85b9-5c36c168969d", 00:08:46.005 "is_configured": true, 00:08:46.005 "data_offset": 0, 00:08:46.005 "data_size": 65536 00:08:46.005 } 00:08:46.005 ] 00:08:46.005 } 00:08:46.005 } 00:08:46.005 }' 00:08:46.005 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:46.005 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:46.005 BaseBdev2' 00:08:46.005 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.005 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:46.005 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.005 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:46.005 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.005 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.005 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.265 [2024-11-27 09:45:47.219576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:46.265 [2024-11-27 09:45:47.219618] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.265 [2024-11-27 09:45:47.219679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.265 "name": "Existed_Raid", 00:08:46.265 "uuid": "653b43c8-410d-41c2-9a0d-361bfb0532f5", 00:08:46.265 "strip_size_kb": 64, 00:08:46.265 
"state": "offline", 00:08:46.265 "raid_level": "raid0", 00:08:46.265 "superblock": false, 00:08:46.265 "num_base_bdevs": 2, 00:08:46.265 "num_base_bdevs_discovered": 1, 00:08:46.265 "num_base_bdevs_operational": 1, 00:08:46.265 "base_bdevs_list": [ 00:08:46.265 { 00:08:46.265 "name": null, 00:08:46.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.265 "is_configured": false, 00:08:46.265 "data_offset": 0, 00:08:46.265 "data_size": 65536 00:08:46.265 }, 00:08:46.265 { 00:08:46.265 "name": "BaseBdev2", 00:08:46.265 "uuid": "38288613-05f3-4986-85b9-5c36c168969d", 00:08:46.265 "is_configured": true, 00:08:46.265 "data_offset": 0, 00:08:46.265 "data_size": 65536 00:08:46.265 } 00:08:46.265 ] 00:08:46.265 }' 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.265 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.835 [2024-11-27 09:45:47.714162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:46.835 [2024-11-27 09:45:47.714226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60963 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60963 ']' 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60963 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60963 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.835 killing process with pid 60963 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60963' 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60963 00:08:46.835 [2024-11-27 09:45:47.902053] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.835 09:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60963 00:08:46.835 [2024-11-27 09:45:47.919590] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:48.248 00:08:48.248 real 0m4.919s 00:08:48.248 user 0m6.873s 00:08:48.248 sys 0m0.885s 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.248 ************************************ 00:08:48.248 END TEST raid_state_function_test 00:08:48.248 ************************************ 00:08:48.248 09:45:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:08:48.248 09:45:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:08:48.248 09:45:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.248 09:45:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:48.248 ************************************ 00:08:48.248 START TEST raid_state_function_test_sb 00:08:48.248 ************************************ 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61216 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61216' 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:48.248 Process raid pid: 61216 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61216 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61216 ']' 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.248 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.248 09:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.248 [2024-11-27 09:45:49.274133] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:08:48.248 [2024-11-27 09:45:49.274746] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.507 [2024-11-27 09:45:49.454634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.507 [2024-11-27 09:45:49.595524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.767 [2024-11-27 09:45:49.835500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:48.767 [2024-11-27 09:45:49.835553] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.026 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.026 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:49.026 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:49.026 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.026 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.026 [2024-11-27 09:45:50.112467] bdev.c:8666:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:08:49.026 [2024-11-27 09:45:50.112535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:49.026 [2024-11-27 09:45:50.112547] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:49.026 [2024-11-27 09:45:50.112558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:49.027 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.027 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:49.027 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.027 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.027 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.027 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.027 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.027 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.027 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.027 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.027 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.027 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.027 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:49.027 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.027 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.027 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.286 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.286 "name": "Existed_Raid", 00:08:49.286 "uuid": "39f2a407-1c16-4da7-9bc4-1df3a68671d6", 00:08:49.286 "strip_size_kb": 64, 00:08:49.286 "state": "configuring", 00:08:49.286 "raid_level": "raid0", 00:08:49.286 "superblock": true, 00:08:49.286 "num_base_bdevs": 2, 00:08:49.286 "num_base_bdevs_discovered": 0, 00:08:49.286 "num_base_bdevs_operational": 2, 00:08:49.286 "base_bdevs_list": [ 00:08:49.286 { 00:08:49.286 "name": "BaseBdev1", 00:08:49.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.286 "is_configured": false, 00:08:49.286 "data_offset": 0, 00:08:49.286 "data_size": 0 00:08:49.286 }, 00:08:49.286 { 00:08:49.286 "name": "BaseBdev2", 00:08:49.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.286 "is_configured": false, 00:08:49.286 "data_offset": 0, 00:08:49.286 "data_size": 0 00:08:49.286 } 00:08:49.286 ] 00:08:49.286 }' 00:08:49.286 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.286 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.546 [2024-11-27 09:45:50.559635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:08:49.546 [2024-11-27 09:45:50.559687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.546 [2024-11-27 09:45:50.571592] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:49.546 [2024-11-27 09:45:50.571654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:49.546 [2024-11-27 09:45:50.571663] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:49.546 [2024-11-27 09:45:50.571677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.546 [2024-11-27 09:45:50.624259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.546 BaseBdev1 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.546 [ 00:08:49.546 { 00:08:49.546 "name": "BaseBdev1", 00:08:49.546 "aliases": [ 00:08:49.546 "195ff4fa-078a-4e15-ab3d-e0b2cdb6fdcb" 00:08:49.546 ], 00:08:49.546 "product_name": "Malloc disk", 00:08:49.546 "block_size": 512, 00:08:49.546 "num_blocks": 65536, 00:08:49.546 "uuid": "195ff4fa-078a-4e15-ab3d-e0b2cdb6fdcb", 00:08:49.546 "assigned_rate_limits": { 00:08:49.546 "rw_ios_per_sec": 0, 00:08:49.546 "rw_mbytes_per_sec": 0, 00:08:49.546 "r_mbytes_per_sec": 0, 00:08:49.546 "w_mbytes_per_sec": 0 00:08:49.546 }, 00:08:49.546 "claimed": true, 
00:08:49.546 "claim_type": "exclusive_write", 00:08:49.546 "zoned": false, 00:08:49.546 "supported_io_types": { 00:08:49.546 "read": true, 00:08:49.546 "write": true, 00:08:49.546 "unmap": true, 00:08:49.546 "flush": true, 00:08:49.546 "reset": true, 00:08:49.546 "nvme_admin": false, 00:08:49.546 "nvme_io": false, 00:08:49.546 "nvme_io_md": false, 00:08:49.546 "write_zeroes": true, 00:08:49.546 "zcopy": true, 00:08:49.546 "get_zone_info": false, 00:08:49.546 "zone_management": false, 00:08:49.546 "zone_append": false, 00:08:49.546 "compare": false, 00:08:49.546 "compare_and_write": false, 00:08:49.546 "abort": true, 00:08:49.546 "seek_hole": false, 00:08:49.546 "seek_data": false, 00:08:49.546 "copy": true, 00:08:49.546 "nvme_iov_md": false 00:08:49.546 }, 00:08:49.546 "memory_domains": [ 00:08:49.546 { 00:08:49.546 "dma_device_id": "system", 00:08:49.546 "dma_device_type": 1 00:08:49.546 }, 00:08:49.546 { 00:08:49.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.546 "dma_device_type": 2 00:08:49.546 } 00:08:49.546 ], 00:08:49.546 "driver_specific": {} 00:08:49.546 } 00:08:49.546 ] 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.546 09:45:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.546 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.807 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.807 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.807 "name": "Existed_Raid", 00:08:49.807 "uuid": "3aa38ddc-d07c-4aff-b5f1-7ccff1fccad1", 00:08:49.807 "strip_size_kb": 64, 00:08:49.807 "state": "configuring", 00:08:49.807 "raid_level": "raid0", 00:08:49.807 "superblock": true, 00:08:49.807 "num_base_bdevs": 2, 00:08:49.807 "num_base_bdevs_discovered": 1, 00:08:49.807 "num_base_bdevs_operational": 2, 00:08:49.807 "base_bdevs_list": [ 00:08:49.807 { 00:08:49.807 "name": "BaseBdev1", 00:08:49.807 "uuid": "195ff4fa-078a-4e15-ab3d-e0b2cdb6fdcb", 00:08:49.807 "is_configured": true, 00:08:49.807 "data_offset": 2048, 00:08:49.807 "data_size": 63488 00:08:49.807 }, 00:08:49.807 { 00:08:49.807 "name": "BaseBdev2", 00:08:49.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.807 
"is_configured": false, 00:08:49.807 "data_offset": 0, 00:08:49.807 "data_size": 0 00:08:49.807 } 00:08:49.807 ] 00:08:49.807 }' 00:08:49.807 09:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.807 09:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.067 [2024-11-27 09:45:51.075617] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:50.067 [2024-11-27 09:45:51.075694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.067 [2024-11-27 09:45:51.087649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.067 [2024-11-27 09:45:51.089986] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:50.067 [2024-11-27 09:45:51.090045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.067 09:45:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.067 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.067 09:45:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.067 "name": "Existed_Raid", 00:08:50.067 "uuid": "6b509536-c180-459b-bdcb-a508c4d4f7da", 00:08:50.067 "strip_size_kb": 64, 00:08:50.067 "state": "configuring", 00:08:50.067 "raid_level": "raid0", 00:08:50.067 "superblock": true, 00:08:50.067 "num_base_bdevs": 2, 00:08:50.067 "num_base_bdevs_discovered": 1, 00:08:50.067 "num_base_bdevs_operational": 2, 00:08:50.067 "base_bdevs_list": [ 00:08:50.067 { 00:08:50.067 "name": "BaseBdev1", 00:08:50.067 "uuid": "195ff4fa-078a-4e15-ab3d-e0b2cdb6fdcb", 00:08:50.067 "is_configured": true, 00:08:50.067 "data_offset": 2048, 00:08:50.067 "data_size": 63488 00:08:50.067 }, 00:08:50.067 { 00:08:50.067 "name": "BaseBdev2", 00:08:50.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.067 "is_configured": false, 00:08:50.067 "data_offset": 0, 00:08:50.067 "data_size": 0 00:08:50.067 } 00:08:50.067 ] 00:08:50.067 }' 00:08:50.068 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.068 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.637 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:50.637 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.637 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.637 [2024-11-27 09:45:51.574391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.637 [2024-11-27 09:45:51.574710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:50.637 [2024-11-27 09:45:51.574726] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:50.637 [2024-11-27 09:45:51.575030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:08:50.637 [2024-11-27 09:45:51.575220] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:50.637 [2024-11-27 09:45:51.575243] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:50.637 [2024-11-27 09:45:51.575401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.637 BaseBdev2 00:08:50.637 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.637 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:50.637 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:50.637 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.637 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:50.637 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.637 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.637 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.637 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.637 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.637 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.637 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.638 09:45:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.638 [ 00:08:50.638 { 00:08:50.638 "name": "BaseBdev2", 00:08:50.638 "aliases": [ 00:08:50.638 "9e0f4765-12ee-4001-bb51-103e4ef16595" 00:08:50.638 ], 00:08:50.638 "product_name": "Malloc disk", 00:08:50.638 "block_size": 512, 00:08:50.638 "num_blocks": 65536, 00:08:50.638 "uuid": "9e0f4765-12ee-4001-bb51-103e4ef16595", 00:08:50.638 "assigned_rate_limits": { 00:08:50.638 "rw_ios_per_sec": 0, 00:08:50.638 "rw_mbytes_per_sec": 0, 00:08:50.638 "r_mbytes_per_sec": 0, 00:08:50.638 "w_mbytes_per_sec": 0 00:08:50.638 }, 00:08:50.638 "claimed": true, 00:08:50.638 "claim_type": "exclusive_write", 00:08:50.638 "zoned": false, 00:08:50.638 "supported_io_types": { 00:08:50.638 "read": true, 00:08:50.638 "write": true, 00:08:50.638 "unmap": true, 00:08:50.638 "flush": true, 00:08:50.638 "reset": true, 00:08:50.638 "nvme_admin": false, 00:08:50.638 "nvme_io": false, 00:08:50.638 "nvme_io_md": false, 00:08:50.638 "write_zeroes": true, 00:08:50.638 "zcopy": true, 00:08:50.638 "get_zone_info": false, 00:08:50.638 "zone_management": false, 00:08:50.638 "zone_append": false, 00:08:50.638 "compare": false, 00:08:50.638 "compare_and_write": false, 00:08:50.638 "abort": true, 00:08:50.638 "seek_hole": false, 00:08:50.638 "seek_data": false, 00:08:50.638 "copy": true, 00:08:50.638 "nvme_iov_md": false 00:08:50.638 }, 00:08:50.638 "memory_domains": [ 00:08:50.638 { 00:08:50.638 "dma_device_id": "system", 00:08:50.638 "dma_device_type": 1 00:08:50.638 }, 00:08:50.638 { 00:08:50.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.638 "dma_device_type": 2 00:08:50.638 } 00:08:50.638 ], 00:08:50.638 "driver_specific": {} 00:08:50.638 } 00:08:50.638 ] 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:50.638 09:45:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.638 09:45:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.638 "name": "Existed_Raid", 00:08:50.638 "uuid": "6b509536-c180-459b-bdcb-a508c4d4f7da", 00:08:50.638 "strip_size_kb": 64, 00:08:50.638 "state": "online", 00:08:50.638 "raid_level": "raid0", 00:08:50.638 "superblock": true, 00:08:50.638 "num_base_bdevs": 2, 00:08:50.638 "num_base_bdevs_discovered": 2, 00:08:50.638 "num_base_bdevs_operational": 2, 00:08:50.638 "base_bdevs_list": [ 00:08:50.638 { 00:08:50.638 "name": "BaseBdev1", 00:08:50.638 "uuid": "195ff4fa-078a-4e15-ab3d-e0b2cdb6fdcb", 00:08:50.638 "is_configured": true, 00:08:50.638 "data_offset": 2048, 00:08:50.638 "data_size": 63488 00:08:50.638 }, 00:08:50.638 { 00:08:50.638 "name": "BaseBdev2", 00:08:50.638 "uuid": "9e0f4765-12ee-4001-bb51-103e4ef16595", 00:08:50.638 "is_configured": true, 00:08:50.638 "data_offset": 2048, 00:08:50.638 "data_size": 63488 00:08:50.638 } 00:08:50.638 ] 00:08:50.638 }' 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.638 09:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.208 [2024-11-27 09:45:52.081828] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:51.208 "name": "Existed_Raid", 00:08:51.208 "aliases": [ 00:08:51.208 "6b509536-c180-459b-bdcb-a508c4d4f7da" 00:08:51.208 ], 00:08:51.208 "product_name": "Raid Volume", 00:08:51.208 "block_size": 512, 00:08:51.208 "num_blocks": 126976, 00:08:51.208 "uuid": "6b509536-c180-459b-bdcb-a508c4d4f7da", 00:08:51.208 "assigned_rate_limits": { 00:08:51.208 "rw_ios_per_sec": 0, 00:08:51.208 "rw_mbytes_per_sec": 0, 00:08:51.208 "r_mbytes_per_sec": 0, 00:08:51.208 "w_mbytes_per_sec": 0 00:08:51.208 }, 00:08:51.208 "claimed": false, 00:08:51.208 "zoned": false, 00:08:51.208 "supported_io_types": { 00:08:51.208 "read": true, 00:08:51.208 "write": true, 00:08:51.208 "unmap": true, 00:08:51.208 "flush": true, 00:08:51.208 "reset": true, 00:08:51.208 "nvme_admin": false, 00:08:51.208 "nvme_io": false, 00:08:51.208 "nvme_io_md": false, 00:08:51.208 "write_zeroes": true, 00:08:51.208 "zcopy": false, 00:08:51.208 "get_zone_info": false, 00:08:51.208 "zone_management": false, 00:08:51.208 "zone_append": false, 00:08:51.208 "compare": false, 00:08:51.208 "compare_and_write": false, 00:08:51.208 "abort": false, 00:08:51.208 "seek_hole": false, 00:08:51.208 "seek_data": false, 00:08:51.208 "copy": false, 00:08:51.208 "nvme_iov_md": false 00:08:51.208 }, 00:08:51.208 "memory_domains": [ 00:08:51.208 { 00:08:51.208 
"dma_device_id": "system", 00:08:51.208 "dma_device_type": 1 00:08:51.208 }, 00:08:51.208 { 00:08:51.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.208 "dma_device_type": 2 00:08:51.208 }, 00:08:51.208 { 00:08:51.208 "dma_device_id": "system", 00:08:51.208 "dma_device_type": 1 00:08:51.208 }, 00:08:51.208 { 00:08:51.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.208 "dma_device_type": 2 00:08:51.208 } 00:08:51.208 ], 00:08:51.208 "driver_specific": { 00:08:51.208 "raid": { 00:08:51.208 "uuid": "6b509536-c180-459b-bdcb-a508c4d4f7da", 00:08:51.208 "strip_size_kb": 64, 00:08:51.208 "state": "online", 00:08:51.208 "raid_level": "raid0", 00:08:51.208 "superblock": true, 00:08:51.208 "num_base_bdevs": 2, 00:08:51.208 "num_base_bdevs_discovered": 2, 00:08:51.208 "num_base_bdevs_operational": 2, 00:08:51.208 "base_bdevs_list": [ 00:08:51.208 { 00:08:51.208 "name": "BaseBdev1", 00:08:51.208 "uuid": "195ff4fa-078a-4e15-ab3d-e0b2cdb6fdcb", 00:08:51.208 "is_configured": true, 00:08:51.208 "data_offset": 2048, 00:08:51.208 "data_size": 63488 00:08:51.208 }, 00:08:51.208 { 00:08:51.208 "name": "BaseBdev2", 00:08:51.208 "uuid": "9e0f4765-12ee-4001-bb51-103e4ef16595", 00:08:51.208 "is_configured": true, 00:08:51.208 "data_offset": 2048, 00:08:51.208 "data_size": 63488 00:08:51.208 } 00:08:51.208 ] 00:08:51.208 } 00:08:51.208 } 00:08:51.208 }' 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:51.208 BaseBdev2' 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:51.208 09:45:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.208 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.208 [2024-11-27 09:45:52.317202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:51.208 [2024-11-27 09:45:52.317290] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.208 [2024-11-27 09:45:52.317369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.469 "name": "Existed_Raid", 00:08:51.469 "uuid": "6b509536-c180-459b-bdcb-a508c4d4f7da", 00:08:51.469 "strip_size_kb": 64, 00:08:51.469 "state": "offline", 00:08:51.469 "raid_level": "raid0", 00:08:51.469 "superblock": true, 00:08:51.469 "num_base_bdevs": 2, 00:08:51.469 "num_base_bdevs_discovered": 1, 00:08:51.469 "num_base_bdevs_operational": 1, 00:08:51.469 "base_bdevs_list": [ 00:08:51.469 { 00:08:51.469 "name": null, 00:08:51.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.469 "is_configured": false, 00:08:51.469 "data_offset": 0, 00:08:51.469 "data_size": 63488 00:08:51.469 }, 00:08:51.469 { 00:08:51.469 "name": "BaseBdev2", 00:08:51.469 "uuid": "9e0f4765-12ee-4001-bb51-103e4ef16595", 00:08:51.469 "is_configured": true, 00:08:51.469 "data_offset": 2048, 00:08:51.469 "data_size": 63488 00:08:51.469 } 00:08:51.469 ] 
00:08:51.469 }' 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.469 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.729 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:51.729 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.729 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.729 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:51.729 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.729 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.989 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.989 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:51.989 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:51.989 09:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:51.989 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.989 09:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.989 [2024-11-27 09:45:52.896081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:51.989 [2024-11-27 09:45:52.896198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.989 09:45:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61216 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61216 ']' 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61216 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61216 00:08:51.989 killing process with pid 61216 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61216' 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61216 00:08:51.989 [2024-11-27 09:45:53.094317] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:51.989 09:45:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61216 00:08:51.989 [2024-11-27 09:45:53.111350] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:53.371 09:45:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:53.371 00:08:53.371 real 0m5.133s 00:08:53.371 user 0m7.269s 00:08:53.371 sys 0m0.906s 00:08:53.371 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.371 ************************************ 00:08:53.371 END TEST raid_state_function_test_sb 00:08:53.371 ************************************ 00:08:53.371 09:45:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.371 09:45:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:53.371 09:45:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:53.371 09:45:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.371 09:45:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:53.371 ************************************ 00:08:53.371 START TEST raid_superblock_test 00:08:53.371 ************************************ 00:08:53.371 09:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:08:53.371 09:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:53.371 09:45:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61463 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61463 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61463 ']' 00:08:53.372 
09:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.372 09:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.372 [2024-11-27 09:45:54.474241] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:08:53.372 [2024-11-27 09:45:54.474470] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61463 ] 00:08:53.632 [2024-11-27 09:45:54.654165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.891 [2024-11-27 09:45:54.790859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.891 [2024-11-27 09:45:55.017379] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.891 [2024-11-27 09:45:55.017558] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.461 malloc1 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.461 [2024-11-27 09:45:55.347071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:54.461 [2024-11-27 09:45:55.347139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.461 [2024-11-27 09:45:55.347179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:54.461 [2024-11-27 09:45:55.347189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:54.461 [2024-11-27 09:45:55.349606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.461 [2024-11-27 09:45:55.349644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:54.461 pt1 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.461 malloc2 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:54.461 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.462 [2024-11-27 09:45:55.407038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:54.462 [2024-11-27 09:45:55.407168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.462 [2024-11-27 09:45:55.407215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:54.462 [2024-11-27 09:45:55.407262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.462 [2024-11-27 09:45:55.409699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.462 [2024-11-27 09:45:55.409788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:54.462 pt2 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.462 [2024-11-27 09:45:55.419077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:54.462 [2024-11-27 09:45:55.421229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:54.462 [2024-11-27 09:45:55.421463] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:54.462 [2024-11-27 09:45:55.421507] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:08:54.462 [2024-11-27 09:45:55.421772] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:54.462 [2024-11-27 09:45:55.421958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:54.462 [2024-11-27 09:45:55.422008] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:54.462 [2024-11-27 09:45:55.422198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.462 09:45:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.462 "name": "raid_bdev1", 00:08:54.462 "uuid": "5788e98a-0846-4527-b84a-947e3d58c7bc", 00:08:54.462 "strip_size_kb": 64, 00:08:54.462 "state": "online", 00:08:54.462 "raid_level": "raid0", 00:08:54.462 "superblock": true, 00:08:54.462 "num_base_bdevs": 2, 00:08:54.462 "num_base_bdevs_discovered": 2, 00:08:54.462 "num_base_bdevs_operational": 2, 00:08:54.462 "base_bdevs_list": [ 00:08:54.462 { 00:08:54.462 "name": "pt1", 00:08:54.462 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:54.462 "is_configured": true, 00:08:54.462 "data_offset": 2048, 00:08:54.462 "data_size": 63488 00:08:54.462 }, 00:08:54.462 { 00:08:54.462 "name": "pt2", 00:08:54.462 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:54.462 "is_configured": true, 00:08:54.462 "data_offset": 2048, 00:08:54.462 "data_size": 63488 00:08:54.462 } 00:08:54.462 ] 00:08:54.462 }' 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.462 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.722 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:54.722 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:54.722 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:54.722 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:54.722 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:54.722 
09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:54.722 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:54.722 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:54.722 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.722 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.722 [2024-11-27 09:45:55.798656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.722 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.722 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:54.722 "name": "raid_bdev1", 00:08:54.722 "aliases": [ 00:08:54.722 "5788e98a-0846-4527-b84a-947e3d58c7bc" 00:08:54.722 ], 00:08:54.722 "product_name": "Raid Volume", 00:08:54.722 "block_size": 512, 00:08:54.722 "num_blocks": 126976, 00:08:54.722 "uuid": "5788e98a-0846-4527-b84a-947e3d58c7bc", 00:08:54.722 "assigned_rate_limits": { 00:08:54.722 "rw_ios_per_sec": 0, 00:08:54.722 "rw_mbytes_per_sec": 0, 00:08:54.722 "r_mbytes_per_sec": 0, 00:08:54.722 "w_mbytes_per_sec": 0 00:08:54.722 }, 00:08:54.722 "claimed": false, 00:08:54.722 "zoned": false, 00:08:54.722 "supported_io_types": { 00:08:54.722 "read": true, 00:08:54.722 "write": true, 00:08:54.722 "unmap": true, 00:08:54.722 "flush": true, 00:08:54.722 "reset": true, 00:08:54.722 "nvme_admin": false, 00:08:54.722 "nvme_io": false, 00:08:54.722 "nvme_io_md": false, 00:08:54.722 "write_zeroes": true, 00:08:54.722 "zcopy": false, 00:08:54.722 "get_zone_info": false, 00:08:54.722 "zone_management": false, 00:08:54.722 "zone_append": false, 00:08:54.722 "compare": false, 00:08:54.722 "compare_and_write": false, 00:08:54.722 "abort": false, 00:08:54.722 "seek_hole": false, 00:08:54.722 
"seek_data": false, 00:08:54.722 "copy": false, 00:08:54.722 "nvme_iov_md": false 00:08:54.722 }, 00:08:54.722 "memory_domains": [ 00:08:54.722 { 00:08:54.722 "dma_device_id": "system", 00:08:54.722 "dma_device_type": 1 00:08:54.722 }, 00:08:54.722 { 00:08:54.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.722 "dma_device_type": 2 00:08:54.722 }, 00:08:54.722 { 00:08:54.722 "dma_device_id": "system", 00:08:54.722 "dma_device_type": 1 00:08:54.722 }, 00:08:54.722 { 00:08:54.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.722 "dma_device_type": 2 00:08:54.722 } 00:08:54.722 ], 00:08:54.722 "driver_specific": { 00:08:54.722 "raid": { 00:08:54.722 "uuid": "5788e98a-0846-4527-b84a-947e3d58c7bc", 00:08:54.722 "strip_size_kb": 64, 00:08:54.722 "state": "online", 00:08:54.722 "raid_level": "raid0", 00:08:54.722 "superblock": true, 00:08:54.722 "num_base_bdevs": 2, 00:08:54.722 "num_base_bdevs_discovered": 2, 00:08:54.722 "num_base_bdevs_operational": 2, 00:08:54.722 "base_bdevs_list": [ 00:08:54.722 { 00:08:54.722 "name": "pt1", 00:08:54.722 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:54.722 "is_configured": true, 00:08:54.722 "data_offset": 2048, 00:08:54.722 "data_size": 63488 00:08:54.722 }, 00:08:54.722 { 00:08:54.722 "name": "pt2", 00:08:54.722 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:54.722 "is_configured": true, 00:08:54.722 "data_offset": 2048, 00:08:54.722 "data_size": 63488 00:08:54.722 } 00:08:54.722 ] 00:08:54.722 } 00:08:54.722 } 00:08:54.722 }' 00:08:54.722 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:54.983 pt2' 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.983 09:45:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:54.983 [2024-11-27 09:45:55.986307] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.983 09:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.983 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5788e98a-0846-4527-b84a-947e3d58c7bc 00:08:54.983 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5788e98a-0846-4527-b84a-947e3d58c7bc ']' 00:08:54.983 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:54.983 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.983 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.983 [2024-11-27 09:45:56.033940] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:54.983 [2024-11-27 09:45:56.034025] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.983 [2024-11-27 09:45:56.034119] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.983 [2024-11-27 09:45:56.034171] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.983 [2024-11-27 09:45:56.034184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:54.983 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.983 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:08:54.983 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.983 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.983 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.983 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.983 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:54.983 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:54.983 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:54.983 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:54.983 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.984 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.984 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.984 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:54.984 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:54.984 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.984 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.984 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.244 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:55.244 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:55.244 09:45:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.244 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.244 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.244 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:55.244 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:55.244 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:55.244 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:55.244 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:55.244 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:55.244 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:55.244 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:55.244 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:55.244 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.244 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.244 [2024-11-27 09:45:56.173736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:55.244 [2024-11-27 09:45:56.175934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:55.244 [2024-11-27 09:45:56.176074] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:55.244 [2024-11-27 09:45:56.176179] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:55.244 [2024-11-27 09:45:56.176231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.244 [2024-11-27 09:45:56.176257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:55.244 request: 00:08:55.244 { 00:08:55.244 "name": "raid_bdev1", 00:08:55.244 "raid_level": "raid0", 00:08:55.244 "base_bdevs": [ 00:08:55.244 "malloc1", 00:08:55.244 "malloc2" 00:08:55.244 ], 00:08:55.244 "strip_size_kb": 64, 00:08:55.244 "superblock": false, 00:08:55.244 "method": "bdev_raid_create", 00:08:55.244 "req_id": 1 00:08:55.244 } 00:08:55.244 Got JSON-RPC error response 00:08:55.244 response: 00:08:55.245 { 00:08:55.245 "code": -17, 00:08:55.245 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:55.245 } 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:55.245 
09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.245 [2024-11-27 09:45:56.237603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:55.245 [2024-11-27 09:45:56.237713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.245 [2024-11-27 09:45:56.237747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:55.245 [2024-11-27 09:45:56.237776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.245 [2024-11-27 09:45:56.240301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.245 [2024-11-27 09:45:56.240390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:55.245 [2024-11-27 09:45:56.240486] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:55.245 [2024-11-27 09:45:56.240572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:55.245 pt1 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.245 "name": "raid_bdev1", 00:08:55.245 "uuid": "5788e98a-0846-4527-b84a-947e3d58c7bc", 00:08:55.245 "strip_size_kb": 64, 00:08:55.245 "state": "configuring", 00:08:55.245 "raid_level": "raid0", 00:08:55.245 "superblock": true, 00:08:55.245 "num_base_bdevs": 2, 00:08:55.245 "num_base_bdevs_discovered": 1, 00:08:55.245 "num_base_bdevs_operational": 2, 00:08:55.245 "base_bdevs_list": [ 00:08:55.245 { 00:08:55.245 "name": "pt1", 00:08:55.245 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:55.245 "is_configured": true, 00:08:55.245 "data_offset": 2048, 00:08:55.245 "data_size": 63488 00:08:55.245 }, 00:08:55.245 { 00:08:55.245 "name": null, 00:08:55.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.245 "is_configured": false, 00:08:55.245 "data_offset": 2048, 00:08:55.245 "data_size": 63488 00:08:55.245 } 00:08:55.245 ] 00:08:55.245 }' 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.245 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.815 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:55.815 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:55.815 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:55.815 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:55.815 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.815 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.815 [2024-11-27 09:45:56.680932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:55.815 [2024-11-27 09:45:56.681110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.815 [2024-11-27 09:45:56.681142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:55.815 [2024-11-27 09:45:56.681156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.815 [2024-11-27 09:45:56.681752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.815 [2024-11-27 09:45:56.681777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:08:55.815 [2024-11-27 09:45:56.681883] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:55.815 [2024-11-27 09:45:56.681917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:55.815 [2024-11-27 09:45:56.682087] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:55.815 [2024-11-27 09:45:56.682107] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:55.815 [2024-11-27 09:45:56.682410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:55.815 [2024-11-27 09:45:56.682585] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:55.815 [2024-11-27 09:45:56.682601] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:55.815 [2024-11-27 09:45:56.682774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.815 pt2 00:08:55.815 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.815 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:55.815 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:55.815 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:55.815 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.815 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.815 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.815 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.815 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:08:55.816 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.816 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.816 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.816 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.816 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.816 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.816 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.816 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.816 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.816 09:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.816 "name": "raid_bdev1", 00:08:55.816 "uuid": "5788e98a-0846-4527-b84a-947e3d58c7bc", 00:08:55.816 "strip_size_kb": 64, 00:08:55.816 "state": "online", 00:08:55.816 "raid_level": "raid0", 00:08:55.816 "superblock": true, 00:08:55.816 "num_base_bdevs": 2, 00:08:55.816 "num_base_bdevs_discovered": 2, 00:08:55.816 "num_base_bdevs_operational": 2, 00:08:55.816 "base_bdevs_list": [ 00:08:55.816 { 00:08:55.816 "name": "pt1", 00:08:55.816 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:55.816 "is_configured": true, 00:08:55.816 "data_offset": 2048, 00:08:55.816 "data_size": 63488 00:08:55.816 }, 00:08:55.816 { 00:08:55.816 "name": "pt2", 00:08:55.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.816 "is_configured": true, 00:08:55.816 "data_offset": 2048, 00:08:55.816 "data_size": 63488 00:08:55.816 } 00:08:55.816 ] 00:08:55.816 }' 00:08:55.816 09:45:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.816 09:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.076 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:56.076 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:56.076 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:56.076 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:56.076 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:56.076 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:56.076 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:56.076 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:56.076 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.076 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.076 [2024-11-27 09:45:57.132425] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.076 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.076 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:56.076 "name": "raid_bdev1", 00:08:56.076 "aliases": [ 00:08:56.076 "5788e98a-0846-4527-b84a-947e3d58c7bc" 00:08:56.076 ], 00:08:56.076 "product_name": "Raid Volume", 00:08:56.076 "block_size": 512, 00:08:56.076 "num_blocks": 126976, 00:08:56.076 "uuid": "5788e98a-0846-4527-b84a-947e3d58c7bc", 00:08:56.076 "assigned_rate_limits": { 00:08:56.076 "rw_ios_per_sec": 0, 00:08:56.076 "rw_mbytes_per_sec": 0, 00:08:56.076 
"r_mbytes_per_sec": 0, 00:08:56.076 "w_mbytes_per_sec": 0 00:08:56.076 }, 00:08:56.076 "claimed": false, 00:08:56.076 "zoned": false, 00:08:56.076 "supported_io_types": { 00:08:56.076 "read": true, 00:08:56.076 "write": true, 00:08:56.076 "unmap": true, 00:08:56.076 "flush": true, 00:08:56.076 "reset": true, 00:08:56.076 "nvme_admin": false, 00:08:56.076 "nvme_io": false, 00:08:56.076 "nvme_io_md": false, 00:08:56.076 "write_zeroes": true, 00:08:56.076 "zcopy": false, 00:08:56.076 "get_zone_info": false, 00:08:56.076 "zone_management": false, 00:08:56.076 "zone_append": false, 00:08:56.076 "compare": false, 00:08:56.076 "compare_and_write": false, 00:08:56.076 "abort": false, 00:08:56.076 "seek_hole": false, 00:08:56.076 "seek_data": false, 00:08:56.076 "copy": false, 00:08:56.076 "nvme_iov_md": false 00:08:56.076 }, 00:08:56.076 "memory_domains": [ 00:08:56.076 { 00:08:56.076 "dma_device_id": "system", 00:08:56.076 "dma_device_type": 1 00:08:56.076 }, 00:08:56.076 { 00:08:56.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.076 "dma_device_type": 2 00:08:56.076 }, 00:08:56.076 { 00:08:56.076 "dma_device_id": "system", 00:08:56.076 "dma_device_type": 1 00:08:56.076 }, 00:08:56.076 { 00:08:56.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.076 "dma_device_type": 2 00:08:56.076 } 00:08:56.076 ], 00:08:56.076 "driver_specific": { 00:08:56.076 "raid": { 00:08:56.076 "uuid": "5788e98a-0846-4527-b84a-947e3d58c7bc", 00:08:56.076 "strip_size_kb": 64, 00:08:56.076 "state": "online", 00:08:56.076 "raid_level": "raid0", 00:08:56.076 "superblock": true, 00:08:56.076 "num_base_bdevs": 2, 00:08:56.076 "num_base_bdevs_discovered": 2, 00:08:56.076 "num_base_bdevs_operational": 2, 00:08:56.076 "base_bdevs_list": [ 00:08:56.076 { 00:08:56.076 "name": "pt1", 00:08:56.076 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:56.076 "is_configured": true, 00:08:56.076 "data_offset": 2048, 00:08:56.076 "data_size": 63488 00:08:56.076 }, 00:08:56.076 { 00:08:56.076 "name": 
"pt2", 00:08:56.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:56.076 "is_configured": true, 00:08:56.076 "data_offset": 2048, 00:08:56.076 "data_size": 63488 00:08:56.076 } 00:08:56.076 ] 00:08:56.076 } 00:08:56.076 } 00:08:56.076 }' 00:08:56.076 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:56.076 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:56.076 pt2' 00:08:56.076 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.336 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:56.336 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.336 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.336 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:56.336 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.336 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.336 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.336 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.336 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.336 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.336 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:56.336 09:45:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.336 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.336 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.336 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.336 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.336 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.337 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:56.337 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.337 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.337 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:56.337 [2024-11-27 09:45:57.344065] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.337 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.337 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5788e98a-0846-4527-b84a-947e3d58c7bc '!=' 5788e98a-0846-4527-b84a-947e3d58c7bc ']' 00:08:56.337 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:56.337 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:56.337 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:56.337 09:45:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61463 00:08:56.337 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61463 ']' 00:08:56.337 09:45:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 61463 00:08:56.337 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:56.337 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.337 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61463 00:08:56.337 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.337 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.337 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61463' 00:08:56.337 killing process with pid 61463 00:08:56.337 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61463 00:08:56.337 [2024-11-27 09:45:57.417196] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.337 [2024-11-27 09:45:57.417346] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.337 09:45:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61463 00:08:56.337 [2024-11-27 09:45:57.417431] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.337 [2024-11-27 09:45:57.417448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:56.595 [2024-11-27 09:45:57.631425] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:57.978 09:45:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:57.978 00:08:57.978 real 0m4.446s 00:08:57.978 user 0m6.017s 00:08:57.978 sys 0m0.848s 00:08:57.978 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.978 09:45:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:57.978 ************************************ 00:08:57.978 END TEST raid_superblock_test 00:08:57.978 ************************************ 00:08:57.978 09:45:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:57.978 09:45:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:57.978 09:45:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.978 09:45:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:57.978 ************************************ 00:08:57.978 START TEST raid_read_error_test 00:08:57.978 ************************************ 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rDGtl0ajsK 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61674 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61674 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61674 ']' 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.978 09:45:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.978 [2024-11-27 09:45:59.006260] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:08:57.978 [2024-11-27 09:45:59.006460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61674 ] 00:08:58.238 [2024-11-27 09:45:59.182616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.238 [2024-11-27 09:45:59.313952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.498 [2024-11-27 09:45:59.549003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.498 [2024-11-27 09:45:59.549081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.758 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.758 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:58.758 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:58.758 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:58.758 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.758 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.758 BaseBdev1_malloc 
00:08:58.758 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.758 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:58.758 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.758 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.017 true 00:08:59.017 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.018 [2024-11-27 09:45:59.901894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:59.018 [2024-11-27 09:45:59.901955] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.018 [2024-11-27 09:45:59.901975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:59.018 [2024-11-27 09:45:59.901986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.018 [2024-11-27 09:45:59.904364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.018 [2024-11-27 09:45:59.904451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:59.018 BaseBdev1 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.018 BaseBdev2_malloc 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.018 true 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.018 [2024-11-27 09:45:59.974696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:59.018 [2024-11-27 09:45:59.974758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.018 [2024-11-27 09:45:59.974775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:59.018 [2024-11-27 09:45:59.974786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.018 [2024-11-27 09:45:59.977195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.018 [2024-11-27 09:45:59.977234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:59.018 BaseBdev2 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.018 [2024-11-27 09:45:59.986736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.018 [2024-11-27 09:45:59.988849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.018 [2024-11-27 09:45:59.989053] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:59.018 [2024-11-27 09:45:59.989071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:59.018 [2024-11-27 09:45:59.989322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:59.018 [2024-11-27 09:45:59.989503] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:59.018 [2024-11-27 09:45:59.989517] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:59.018 [2024-11-27 09:45:59.989659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid0 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.018 09:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.018 09:46:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.018 09:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.018 "name": "raid_bdev1", 00:08:59.018 "uuid": "29d60fb4-b13d-45ec-86a9-7b36a8602f92", 00:08:59.018 "strip_size_kb": 64, 00:08:59.018 "state": "online", 00:08:59.018 "raid_level": "raid0", 00:08:59.018 "superblock": true, 00:08:59.018 "num_base_bdevs": 2, 00:08:59.018 "num_base_bdevs_discovered": 2, 00:08:59.018 "num_base_bdevs_operational": 2, 00:08:59.018 "base_bdevs_list": [ 00:08:59.018 { 00:08:59.018 "name": "BaseBdev1", 00:08:59.018 "uuid": "bc06daf0-2779-551c-9ad5-a7effd9f6347", 00:08:59.018 "is_configured": true, 00:08:59.018 "data_offset": 2048, 00:08:59.018 "data_size": 63488 00:08:59.018 }, 00:08:59.018 { 00:08:59.018 "name": "BaseBdev2", 00:08:59.018 "uuid": 
"500ce301-644b-5b33-90a7-df7251f8ad74", 00:08:59.018 "is_configured": true, 00:08:59.018 "data_offset": 2048, 00:08:59.018 "data_size": 63488 00:08:59.018 } 00:08:59.018 ] 00:08:59.018 }' 00:08:59.018 09:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.019 09:46:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.587 09:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:59.587 09:46:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:59.587 [2024-11-27 09:46:00.539408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.526 "name": "raid_bdev1", 00:09:00.526 "uuid": "29d60fb4-b13d-45ec-86a9-7b36a8602f92", 00:09:00.526 "strip_size_kb": 64, 00:09:00.526 "state": "online", 00:09:00.526 "raid_level": "raid0", 00:09:00.526 "superblock": true, 00:09:00.526 "num_base_bdevs": 2, 00:09:00.526 "num_base_bdevs_discovered": 2, 00:09:00.526 "num_base_bdevs_operational": 2, 00:09:00.526 "base_bdevs_list": [ 00:09:00.526 { 00:09:00.526 "name": "BaseBdev1", 00:09:00.526 "uuid": "bc06daf0-2779-551c-9ad5-a7effd9f6347", 00:09:00.526 "is_configured": true, 00:09:00.526 "data_offset": 2048, 00:09:00.526 "data_size": 63488 00:09:00.526 }, 00:09:00.526 { 00:09:00.526 "name": "BaseBdev2", 00:09:00.526 "uuid": 
"500ce301-644b-5b33-90a7-df7251f8ad74", 00:09:00.526 "is_configured": true, 00:09:00.526 "data_offset": 2048, 00:09:00.526 "data_size": 63488 00:09:00.526 } 00:09:00.526 ] 00:09:00.526 }' 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.526 09:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.813 09:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:00.813 09:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.813 09:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.813 [2024-11-27 09:46:01.803343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:00.813 [2024-11-27 09:46:01.803454] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:00.813 [2024-11-27 09:46:01.806225] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.813 [2024-11-27 09:46:01.806337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.813 [2024-11-27 09:46:01.806394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.813 [2024-11-27 09:46:01.806443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:00.813 { 00:09:00.813 "results": [ 00:09:00.813 { 00:09:00.813 "job": "raid_bdev1", 00:09:00.813 "core_mask": "0x1", 00:09:00.813 "workload": "randrw", 00:09:00.813 "percentage": 50, 00:09:00.813 "status": "finished", 00:09:00.813 "queue_depth": 1, 00:09:00.813 "io_size": 131072, 00:09:00.813 "runtime": 1.264213, 00:09:00.813 "iops": 14503.885025703737, 00:09:00.813 "mibps": 1812.9856282129672, 00:09:00.813 "io_failed": 1, 00:09:00.813 "io_timeout": 0, 00:09:00.813 "avg_latency_us": 
96.63453713385945, 00:09:00.813 "min_latency_us": 25.2646288209607, 00:09:00.813 "max_latency_us": 1380.8349344978167 00:09:00.813 } 00:09:00.813 ], 00:09:00.813 "core_count": 1 00:09:00.813 } 00:09:00.813 09:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.813 09:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61674 00:09:00.813 09:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61674 ']' 00:09:00.813 09:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61674 00:09:00.814 09:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:00.814 09:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.814 09:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61674 00:09:00.814 killing process with pid 61674 00:09:00.814 09:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.814 09:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.814 09:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61674' 00:09:00.814 09:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61674 00:09:00.814 [2024-11-27 09:46:01.838687] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:00.814 09:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61674 00:09:01.080 [2024-11-27 09:46:01.979848] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:02.462 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:02.462 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rDGtl0ajsK 00:09:02.462 
09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:02.462 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.79 00:09:02.462 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:02.462 ************************************ 00:09:02.462 END TEST raid_read_error_test 00:09:02.462 ************************************ 00:09:02.462 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:02.462 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:02.462 09:46:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.79 != \0\.\0\0 ]] 00:09:02.462 00:09:02.462 real 0m4.374s 00:09:02.462 user 0m5.041s 00:09:02.462 sys 0m0.636s 00:09:02.462 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.462 09:46:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.462 09:46:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:09:02.462 09:46:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:02.462 09:46:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.462 09:46:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:02.462 ************************************ 00:09:02.462 START TEST raid_write_error_test 00:09:02.462 ************************************ 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:02.462 09:46:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.AyADiwIFx9 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61814 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:02.462 09:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61814 00:09:02.463 09:46:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61814 ']' 00:09:02.463 09:46:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.463 09:46:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.463 09:46:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.463 09:46:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.463 09:46:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.463 [2024-11-27 09:46:03.451194] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:09:02.463 [2024-11-27 09:46:03.451417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61814 ] 00:09:02.722 [2024-11-27 09:46:03.628740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.722 [2024-11-27 09:46:03.757017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.982 [2024-11-27 09:46:03.986501] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.982 [2024-11-27 09:46:03.986677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.242 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.242 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:03.242 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:03.242 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:03.242 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.242 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.242 BaseBdev1_malloc 00:09:03.242 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.242 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:03.242 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.242 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.242 true 00:09:03.242 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:03.242 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:03.242 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.242 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.242 [2024-11-27 09:46:04.328305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:03.242 [2024-11-27 09:46:04.328366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.242 [2024-11-27 09:46:04.328388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:03.242 [2024-11-27 09:46:04.328398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.242 [2024-11-27 09:46:04.330792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.242 [2024-11-27 09:46:04.330834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:03.242 BaseBdev1 00:09:03.242 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.242 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:03.242 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:03.242 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.242 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.503 BaseBdev2_malloc 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:03.503 09:46:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.503 true 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.503 [2024-11-27 09:46:04.399433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:03.503 [2024-11-27 09:46:04.399491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.503 [2024-11-27 09:46:04.399523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:03.503 [2024-11-27 09:46:04.399535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.503 [2024-11-27 09:46:04.401913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.503 [2024-11-27 09:46:04.402013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:03.503 BaseBdev2 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.503 [2024-11-27 09:46:04.411491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:03.503 [2024-11-27 09:46:04.413583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:03.503 [2024-11-27 09:46:04.413826] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:03.503 [2024-11-27 09:46:04.413848] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:03.503 [2024-11-27 09:46:04.414101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:03.503 [2024-11-27 09:46:04.414288] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:03.503 [2024-11-27 09:46:04.414301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:03.503 [2024-11-27 09:46:04.414473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.503 "name": "raid_bdev1", 00:09:03.503 "uuid": "52325360-c764-48fd-b00c-1cf78a8bb285", 00:09:03.503 "strip_size_kb": 64, 00:09:03.503 "state": "online", 00:09:03.503 "raid_level": "raid0", 00:09:03.503 "superblock": true, 00:09:03.503 "num_base_bdevs": 2, 00:09:03.503 "num_base_bdevs_discovered": 2, 00:09:03.503 "num_base_bdevs_operational": 2, 00:09:03.503 "base_bdevs_list": [ 00:09:03.503 { 00:09:03.503 "name": "BaseBdev1", 00:09:03.503 "uuid": "55f649f2-f20e-5d21-a3c4-9e24990b16bf", 00:09:03.503 "is_configured": true, 00:09:03.503 "data_offset": 2048, 00:09:03.503 "data_size": 63488 00:09:03.503 }, 00:09:03.503 { 00:09:03.503 "name": "BaseBdev2", 00:09:03.503 "uuid": "06bd3588-4c9d-582b-bcb0-7998e72669fc", 00:09:03.503 "is_configured": true, 00:09:03.503 "data_offset": 2048, 00:09:03.503 "data_size": 63488 00:09:03.503 } 00:09:03.503 ] 00:09:03.503 }' 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.503 09:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.763 09:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:03.763 09:46:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:04.023 [2024-11-27 09:46:04.931930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.961 09:46:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.961 09:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.962 09:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.962 "name": "raid_bdev1", 00:09:04.962 "uuid": "52325360-c764-48fd-b00c-1cf78a8bb285", 00:09:04.962 "strip_size_kb": 64, 00:09:04.962 "state": "online", 00:09:04.962 "raid_level": "raid0", 00:09:04.962 "superblock": true, 00:09:04.962 "num_base_bdevs": 2, 00:09:04.962 "num_base_bdevs_discovered": 2, 00:09:04.962 "num_base_bdevs_operational": 2, 00:09:04.962 "base_bdevs_list": [ 00:09:04.962 { 00:09:04.962 "name": "BaseBdev1", 00:09:04.962 "uuid": "55f649f2-f20e-5d21-a3c4-9e24990b16bf", 00:09:04.962 "is_configured": true, 00:09:04.962 "data_offset": 2048, 00:09:04.962 "data_size": 63488 00:09:04.962 }, 00:09:04.962 { 00:09:04.962 "name": "BaseBdev2", 00:09:04.962 "uuid": "06bd3588-4c9d-582b-bcb0-7998e72669fc", 00:09:04.962 "is_configured": true, 00:09:04.962 "data_offset": 2048, 00:09:04.962 "data_size": 63488 00:09:04.962 } 00:09:04.962 ] 00:09:04.962 }' 00:09:04.962 09:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.962 09:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.222 09:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:09:05.222 09:46:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.222 09:46:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.222 [2024-11-27 09:46:06.256163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:05.222 [2024-11-27 09:46:06.256271] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.222 [2024-11-27 09:46:06.258969] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.222 [2024-11-27 09:46:06.259088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.222 [2024-11-27 09:46:06.259148] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.222 [2024-11-27 09:46:06.259193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:05.222 09:46:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.222 { 00:09:05.222 "results": [ 00:09:05.222 { 00:09:05.222 "job": "raid_bdev1", 00:09:05.222 "core_mask": "0x1", 00:09:05.222 "workload": "randrw", 00:09:05.222 "percentage": 50, 00:09:05.222 "status": "finished", 00:09:05.222 "queue_depth": 1, 00:09:05.222 "io_size": 131072, 00:09:05.222 "runtime": 1.324813, 00:09:05.222 "iops": 14652.633994382604, 00:09:05.222 "mibps": 1831.5792492978255, 00:09:05.222 "io_failed": 1, 00:09:05.222 "io_timeout": 0, 00:09:05.222 "avg_latency_us": 95.50659309241522, 00:09:05.222 "min_latency_us": 24.817467248908297, 00:09:05.222 "max_latency_us": 1459.5353711790392 00:09:05.222 } 00:09:05.222 ], 00:09:05.222 "core_count": 1 00:09:05.222 } 00:09:05.222 09:46:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61814 00:09:05.222 09:46:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61814 ']' 00:09:05.222 09:46:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61814 00:09:05.222 09:46:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:05.222 09:46:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.222 09:46:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61814 00:09:05.222 killing process with pid 61814 00:09:05.222 09:46:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.223 09:46:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.223 09:46:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61814' 00:09:05.223 09:46:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61814 00:09:05.223 [2024-11-27 09:46:06.307792] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:05.223 09:46:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61814 00:09:05.482 [2024-11-27 09:46:06.453258] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:06.861 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:06.861 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.AyADiwIFx9 00:09:06.861 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:06.861 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:09:06.861 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:06.861 ************************************ 00:09:06.861 END TEST raid_write_error_test 00:09:06.861 ************************************ 00:09:06.861 
09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:06.861 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:06.861 09:46:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:09:06.861 00:09:06.861 real 0m4.353s 00:09:06.861 user 0m5.043s 00:09:06.861 sys 0m0.648s 00:09:06.861 09:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.861 09:46:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.861 09:46:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:06.861 09:46:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:09:06.861 09:46:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:06.861 09:46:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.861 09:46:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:06.861 ************************************ 00:09:06.861 START TEST raid_state_function_test 00:09:06.861 ************************************ 00:09:06.861 09:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:09:06.861 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61958 00:09:06.862 09:46:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61958' 00:09:06.862 Process raid pid: 61958 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61958 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61958 ']' 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.862 09:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.862 [2024-11-27 09:46:07.868660] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:09:06.862 [2024-11-27 09:46:07.868798] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.121 [2024-11-27 09:46:08.048112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.121 [2024-11-27 09:46:08.176967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.380 [2024-11-27 09:46:08.413914] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.380 [2024-11-27 09:46:08.414097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.640 [2024-11-27 09:46:08.695183] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:07.640 [2024-11-27 09:46:08.695248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:07.640 [2024-11-27 09:46:08.695275] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:07.640 [2024-11-27 09:46:08.695287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.640 09:46:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.640 "name": "Existed_Raid", 00:09:07.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.640 "strip_size_kb": 64, 00:09:07.640 "state": "configuring", 00:09:07.640 
"raid_level": "concat", 00:09:07.640 "superblock": false, 00:09:07.640 "num_base_bdevs": 2, 00:09:07.640 "num_base_bdevs_discovered": 0, 00:09:07.640 "num_base_bdevs_operational": 2, 00:09:07.640 "base_bdevs_list": [ 00:09:07.640 { 00:09:07.640 "name": "BaseBdev1", 00:09:07.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.640 "is_configured": false, 00:09:07.640 "data_offset": 0, 00:09:07.640 "data_size": 0 00:09:07.640 }, 00:09:07.640 { 00:09:07.640 "name": "BaseBdev2", 00:09:07.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.640 "is_configured": false, 00:09:07.640 "data_offset": 0, 00:09:07.640 "data_size": 0 00:09:07.640 } 00:09:07.640 ] 00:09:07.640 }' 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.640 09:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.216 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:08.217 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.217 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.217 [2024-11-27 09:46:09.146334] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:08.217 [2024-11-27 09:46:09.146445] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:08.217 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.217 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:08.217 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.217 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:08.217 [2024-11-27 09:46:09.158282] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.217 [2024-11-27 09:46:09.158373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.217 [2024-11-27 09:46:09.158404] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.217 [2024-11-27 09:46:09.158432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.217 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.217 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:08.217 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.217 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.217 [2024-11-27 09:46:09.211820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.217 BaseBdev1 00:09:08.217 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.217 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:08.217 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:08.217 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.217 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:08.218 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.218 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.218 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:09:08.218 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.218 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.218 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.218 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:08.218 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.218 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.218 [ 00:09:08.218 { 00:09:08.218 "name": "BaseBdev1", 00:09:08.218 "aliases": [ 00:09:08.218 "233152e8-0310-411a-947b-329d7ebd9a46" 00:09:08.218 ], 00:09:08.218 "product_name": "Malloc disk", 00:09:08.218 "block_size": 512, 00:09:08.218 "num_blocks": 65536, 00:09:08.218 "uuid": "233152e8-0310-411a-947b-329d7ebd9a46", 00:09:08.218 "assigned_rate_limits": { 00:09:08.218 "rw_ios_per_sec": 0, 00:09:08.218 "rw_mbytes_per_sec": 0, 00:09:08.218 "r_mbytes_per_sec": 0, 00:09:08.218 "w_mbytes_per_sec": 0 00:09:08.218 }, 00:09:08.218 "claimed": true, 00:09:08.218 "claim_type": "exclusive_write", 00:09:08.218 "zoned": false, 00:09:08.218 "supported_io_types": { 00:09:08.218 "read": true, 00:09:08.218 "write": true, 00:09:08.218 "unmap": true, 00:09:08.219 "flush": true, 00:09:08.219 "reset": true, 00:09:08.219 "nvme_admin": false, 00:09:08.219 "nvme_io": false, 00:09:08.219 "nvme_io_md": false, 00:09:08.219 "write_zeroes": true, 00:09:08.219 "zcopy": true, 00:09:08.219 "get_zone_info": false, 00:09:08.219 "zone_management": false, 00:09:08.219 "zone_append": false, 00:09:08.219 "compare": false, 00:09:08.219 "compare_and_write": false, 00:09:08.219 "abort": true, 00:09:08.219 "seek_hole": false, 00:09:08.219 "seek_data": false, 00:09:08.219 "copy": true, 00:09:08.219 "nvme_iov_md": 
false 00:09:08.219 }, 00:09:08.219 "memory_domains": [ 00:09:08.219 { 00:09:08.219 "dma_device_id": "system", 00:09:08.219 "dma_device_type": 1 00:09:08.219 }, 00:09:08.219 { 00:09:08.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.219 "dma_device_type": 2 00:09:08.219 } 00:09:08.219 ], 00:09:08.219 "driver_specific": {} 00:09:08.219 } 00:09:08.219 ] 00:09:08.219 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.219 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:08.219 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:08.219 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.219 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.219 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.220 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.220 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:08.220 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.220 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.220 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.220 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.220 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.220 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.220 
09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.220 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.220 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.220 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.220 "name": "Existed_Raid", 00:09:08.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.220 "strip_size_kb": 64, 00:09:08.220 "state": "configuring", 00:09:08.220 "raid_level": "concat", 00:09:08.220 "superblock": false, 00:09:08.220 "num_base_bdevs": 2, 00:09:08.220 "num_base_bdevs_discovered": 1, 00:09:08.220 "num_base_bdevs_operational": 2, 00:09:08.220 "base_bdevs_list": [ 00:09:08.220 { 00:09:08.220 "name": "BaseBdev1", 00:09:08.220 "uuid": "233152e8-0310-411a-947b-329d7ebd9a46", 00:09:08.220 "is_configured": true, 00:09:08.220 "data_offset": 0, 00:09:08.220 "data_size": 65536 00:09:08.220 }, 00:09:08.220 { 00:09:08.220 "name": "BaseBdev2", 00:09:08.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.220 "is_configured": false, 00:09:08.221 "data_offset": 0, 00:09:08.221 "data_size": 0 00:09:08.221 } 00:09:08.221 ] 00:09:08.221 }' 00:09:08.221 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.221 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.792 [2024-11-27 09:46:09.723013] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:08.792 [2024-11-27 09:46:09.723079] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.792 [2024-11-27 09:46:09.735031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.792 [2024-11-27 09:46:09.737320] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.792 [2024-11-27 09:46:09.737369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.792 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.792 "name": "Existed_Raid", 00:09:08.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.792 "strip_size_kb": 64, 00:09:08.792 "state": "configuring", 00:09:08.792 "raid_level": "concat", 00:09:08.792 "superblock": false, 00:09:08.792 "num_base_bdevs": 2, 00:09:08.792 "num_base_bdevs_discovered": 1, 00:09:08.792 "num_base_bdevs_operational": 2, 00:09:08.792 "base_bdevs_list": [ 00:09:08.792 { 00:09:08.792 "name": "BaseBdev1", 00:09:08.792 "uuid": "233152e8-0310-411a-947b-329d7ebd9a46", 00:09:08.792 "is_configured": true, 00:09:08.792 "data_offset": 0, 00:09:08.792 "data_size": 65536 00:09:08.792 }, 00:09:08.792 { 00:09:08.792 "name": "BaseBdev2", 00:09:08.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.792 "is_configured": false, 00:09:08.793 "data_offset": 0, 00:09:08.793 "data_size": 0 00:09:08.793 } 
00:09:08.793 ] 00:09:08.793 }' 00:09:08.793 09:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.793 09:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.053 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:09.053 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.053 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.053 [2024-11-27 09:46:10.180813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.053 [2024-11-27 09:46:10.180982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:09.053 [2024-11-27 09:46:10.181051] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:09.053 [2024-11-27 09:46:10.181408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:09.053 [2024-11-27 09:46:10.181668] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:09.053 [2024-11-27 09:46:10.181716] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:09.053 [2024-11-27 09:46:10.182072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.053 BaseBdev2 00:09:09.053 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.053 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:09.314 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:09.314 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.314 09:46:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:09.314 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.314 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.315 [ 00:09:09.315 { 00:09:09.315 "name": "BaseBdev2", 00:09:09.315 "aliases": [ 00:09:09.315 "73a46fae-faaf-40ef-acbc-1c8fc0c0bba5" 00:09:09.315 ], 00:09:09.315 "product_name": "Malloc disk", 00:09:09.315 "block_size": 512, 00:09:09.315 "num_blocks": 65536, 00:09:09.315 "uuid": "73a46fae-faaf-40ef-acbc-1c8fc0c0bba5", 00:09:09.315 "assigned_rate_limits": { 00:09:09.315 "rw_ios_per_sec": 0, 00:09:09.315 "rw_mbytes_per_sec": 0, 00:09:09.315 "r_mbytes_per_sec": 0, 00:09:09.315 "w_mbytes_per_sec": 0 00:09:09.315 }, 00:09:09.315 "claimed": true, 00:09:09.315 "claim_type": "exclusive_write", 00:09:09.315 "zoned": false, 00:09:09.315 "supported_io_types": { 00:09:09.315 "read": true, 00:09:09.315 "write": true, 00:09:09.315 "unmap": true, 00:09:09.315 "flush": true, 00:09:09.315 "reset": true, 00:09:09.315 "nvme_admin": false, 00:09:09.315 "nvme_io": false, 00:09:09.315 "nvme_io_md": 
false, 00:09:09.315 "write_zeroes": true, 00:09:09.315 "zcopy": true, 00:09:09.315 "get_zone_info": false, 00:09:09.315 "zone_management": false, 00:09:09.315 "zone_append": false, 00:09:09.315 "compare": false, 00:09:09.315 "compare_and_write": false, 00:09:09.315 "abort": true, 00:09:09.315 "seek_hole": false, 00:09:09.315 "seek_data": false, 00:09:09.315 "copy": true, 00:09:09.315 "nvme_iov_md": false 00:09:09.315 }, 00:09:09.315 "memory_domains": [ 00:09:09.315 { 00:09:09.315 "dma_device_id": "system", 00:09:09.315 "dma_device_type": 1 00:09:09.315 }, 00:09:09.315 { 00:09:09.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.315 "dma_device_type": 2 00:09:09.315 } 00:09:09.315 ], 00:09:09.315 "driver_specific": {} 00:09:09.315 } 00:09:09.315 ] 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.315 "name": "Existed_Raid", 00:09:09.315 "uuid": "8d256b0a-b5ce-4052-b048-cb85f30ff9f5", 00:09:09.315 "strip_size_kb": 64, 00:09:09.315 "state": "online", 00:09:09.315 "raid_level": "concat", 00:09:09.315 "superblock": false, 00:09:09.315 "num_base_bdevs": 2, 00:09:09.315 "num_base_bdevs_discovered": 2, 00:09:09.315 "num_base_bdevs_operational": 2, 00:09:09.315 "base_bdevs_list": [ 00:09:09.315 { 00:09:09.315 "name": "BaseBdev1", 00:09:09.315 "uuid": "233152e8-0310-411a-947b-329d7ebd9a46", 00:09:09.315 "is_configured": true, 00:09:09.315 "data_offset": 0, 00:09:09.315 "data_size": 65536 00:09:09.315 }, 00:09:09.315 { 00:09:09.315 "name": "BaseBdev2", 00:09:09.315 "uuid": "73a46fae-faaf-40ef-acbc-1c8fc0c0bba5", 00:09:09.315 "is_configured": true, 00:09:09.315 "data_offset": 0, 00:09:09.315 "data_size": 65536 00:09:09.315 } 00:09:09.315 ] 00:09:09.315 }' 00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:09.315 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.576 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:09.576 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:09.576 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:09.576 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:09.576 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:09.576 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:09.576 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:09.576 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.576 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.576 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:09.576 [2024-11-27 09:46:10.680403] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.576 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:09.838 "name": "Existed_Raid", 00:09:09.838 "aliases": [ 00:09:09.838 "8d256b0a-b5ce-4052-b048-cb85f30ff9f5" 00:09:09.838 ], 00:09:09.838 "product_name": "Raid Volume", 00:09:09.838 "block_size": 512, 00:09:09.838 "num_blocks": 131072, 00:09:09.838 "uuid": "8d256b0a-b5ce-4052-b048-cb85f30ff9f5", 00:09:09.838 "assigned_rate_limits": { 00:09:09.838 "rw_ios_per_sec": 0, 00:09:09.838 "rw_mbytes_per_sec": 0, 00:09:09.838 "r_mbytes_per_sec": 
0, 00:09:09.838 "w_mbytes_per_sec": 0 00:09:09.838 }, 00:09:09.838 "claimed": false, 00:09:09.838 "zoned": false, 00:09:09.838 "supported_io_types": { 00:09:09.838 "read": true, 00:09:09.838 "write": true, 00:09:09.838 "unmap": true, 00:09:09.838 "flush": true, 00:09:09.838 "reset": true, 00:09:09.838 "nvme_admin": false, 00:09:09.838 "nvme_io": false, 00:09:09.838 "nvme_io_md": false, 00:09:09.838 "write_zeroes": true, 00:09:09.838 "zcopy": false, 00:09:09.838 "get_zone_info": false, 00:09:09.838 "zone_management": false, 00:09:09.838 "zone_append": false, 00:09:09.838 "compare": false, 00:09:09.838 "compare_and_write": false, 00:09:09.838 "abort": false, 00:09:09.838 "seek_hole": false, 00:09:09.838 "seek_data": false, 00:09:09.838 "copy": false, 00:09:09.838 "nvme_iov_md": false 00:09:09.838 }, 00:09:09.838 "memory_domains": [ 00:09:09.838 { 00:09:09.838 "dma_device_id": "system", 00:09:09.838 "dma_device_type": 1 00:09:09.838 }, 00:09:09.838 { 00:09:09.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.838 "dma_device_type": 2 00:09:09.838 }, 00:09:09.838 { 00:09:09.838 "dma_device_id": "system", 00:09:09.838 "dma_device_type": 1 00:09:09.838 }, 00:09:09.838 { 00:09:09.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.838 "dma_device_type": 2 00:09:09.838 } 00:09:09.838 ], 00:09:09.838 "driver_specific": { 00:09:09.838 "raid": { 00:09:09.838 "uuid": "8d256b0a-b5ce-4052-b048-cb85f30ff9f5", 00:09:09.838 "strip_size_kb": 64, 00:09:09.838 "state": "online", 00:09:09.838 "raid_level": "concat", 00:09:09.838 "superblock": false, 00:09:09.838 "num_base_bdevs": 2, 00:09:09.838 "num_base_bdevs_discovered": 2, 00:09:09.838 "num_base_bdevs_operational": 2, 00:09:09.838 "base_bdevs_list": [ 00:09:09.838 { 00:09:09.838 "name": "BaseBdev1", 00:09:09.838 "uuid": "233152e8-0310-411a-947b-329d7ebd9a46", 00:09:09.838 "is_configured": true, 00:09:09.838 "data_offset": 0, 00:09:09.838 "data_size": 65536 00:09:09.838 }, 00:09:09.838 { 00:09:09.838 "name": "BaseBdev2", 
00:09:09.838 "uuid": "73a46fae-faaf-40ef-acbc-1c8fc0c0bba5", 00:09:09.838 "is_configured": true, 00:09:09.838 "data_offset": 0, 00:09:09.838 "data_size": 65536 00:09:09.838 } 00:09:09.838 ] 00:09:09.838 } 00:09:09.838 } 00:09:09.838 }' 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:09.838 BaseBdev2' 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.838 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.838 [2024-11-27 09:46:10.875783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:09.838 [2024-11-27 09:46:10.875867] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.838 [2024-11-27 09:46:10.875956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.099 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.099 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:10.099 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:10.099 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:10.099 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:10.099 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:10.099 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:10.099 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.099 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:10.099 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.099 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.099 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:10.099 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.099 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.099 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.099 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.099 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.099 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.099 09:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.099 09:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.099 09:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.099 09:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.099 "name": "Existed_Raid", 00:09:10.099 "uuid": "8d256b0a-b5ce-4052-b048-cb85f30ff9f5", 00:09:10.099 "strip_size_kb": 64, 00:09:10.099 
"state": "offline", 00:09:10.099 "raid_level": "concat", 00:09:10.099 "superblock": false, 00:09:10.099 "num_base_bdevs": 2, 00:09:10.099 "num_base_bdevs_discovered": 1, 00:09:10.099 "num_base_bdevs_operational": 1, 00:09:10.099 "base_bdevs_list": [ 00:09:10.099 { 00:09:10.099 "name": null, 00:09:10.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.099 "is_configured": false, 00:09:10.099 "data_offset": 0, 00:09:10.099 "data_size": 65536 00:09:10.099 }, 00:09:10.099 { 00:09:10.099 "name": "BaseBdev2", 00:09:10.099 "uuid": "73a46fae-faaf-40ef-acbc-1c8fc0c0bba5", 00:09:10.099 "is_configured": true, 00:09:10.099 "data_offset": 0, 00:09:10.099 "data_size": 65536 00:09:10.099 } 00:09:10.099 ] 00:09:10.099 }' 00:09:10.099 09:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.099 09:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.360 09:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:10.360 09:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:10.360 09:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:10.360 09:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.360 09:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.360 09:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.360 09:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.360 09:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:10.360 09:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:10.360 09:46:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:10.360 09:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.360 09:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.360 [2024-11-27 09:46:11.431946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:10.360 [2024-11-27 09:46:11.432026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61958 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61958 ']' 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61958 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61958 00:09:10.621 killing process with pid 61958 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61958' 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61958 00:09:10.621 [2024-11-27 09:46:11.638536] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:10.621 09:46:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61958 00:09:10.621 [2024-11-27 09:46:11.656415] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:12.008 00:09:12.008 real 0m5.126s 00:09:12.008 user 0m7.204s 00:09:12.008 sys 0m0.911s 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.008 ************************************ 00:09:12.008 END TEST raid_state_function_test 00:09:12.008 ************************************ 00:09:12.008 09:46:12 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:12.008 09:46:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:09:12.008 09:46:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.008 09:46:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.008 ************************************ 00:09:12.008 START TEST raid_state_function_test_sb 00:09:12.008 ************************************ 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62211 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62211' 00:09:12.008 Process raid pid: 62211 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62211 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62211 ']' 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.008 09:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.008 [2024-11-27 09:46:13.057582] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:09:12.008 [2024-11-27 09:46:13.057852] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.268 [2024-11-27 09:46:13.237927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.268 [2024-11-27 09:46:13.374455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.528 [2024-11-27 09:46:13.612434] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.528 [2024-11-27 09:46:13.612590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.788 09:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.788 09:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:12.788 09:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:12.788 09:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.788 09:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.048 [2024-11-27 09:46:13.921456] bdev.c:8666:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:09:13.048 [2024-11-27 09:46:13.921569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.048 [2024-11-27 09:46:13.921592] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.048 [2024-11-27 09:46:13.921603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.048 09:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.048 09:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:13.048 09:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.048 09:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.048 09:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.048 09:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.048 09:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:13.048 09:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.048 09:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.048 09:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.048 09:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.048 09:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.048 09:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:09:13.048 09:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.048 09:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.048 09:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.048 09:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.048 "name": "Existed_Raid", 00:09:13.048 "uuid": "c35d3ab0-d99c-4c30-967a-f02c859a8924", 00:09:13.048 "strip_size_kb": 64, 00:09:13.048 "state": "configuring", 00:09:13.048 "raid_level": "concat", 00:09:13.048 "superblock": true, 00:09:13.048 "num_base_bdevs": 2, 00:09:13.048 "num_base_bdevs_discovered": 0, 00:09:13.048 "num_base_bdevs_operational": 2, 00:09:13.048 "base_bdevs_list": [ 00:09:13.048 { 00:09:13.048 "name": "BaseBdev1", 00:09:13.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.048 "is_configured": false, 00:09:13.048 "data_offset": 0, 00:09:13.048 "data_size": 0 00:09:13.048 }, 00:09:13.048 { 00:09:13.048 "name": "BaseBdev2", 00:09:13.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.048 "is_configured": false, 00:09:13.048 "data_offset": 0, 00:09:13.048 "data_size": 0 00:09:13.048 } 00:09:13.048 ] 00:09:13.048 }' 00:09:13.048 09:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.048 09:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.308 [2024-11-27 09:46:14.348658] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:13.308 
[2024-11-27 09:46:14.348758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.308 [2024-11-27 09:46:14.360625] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.308 [2024-11-27 09:46:14.360712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.308 [2024-11-27 09:46:14.360740] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.308 [2024-11-27 09:46:14.360767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.308 [2024-11-27 09:46:14.413341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.308 BaseBdev1 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
waitforbdev BaseBdev1 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.308 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.568 [ 00:09:13.568 { 00:09:13.568 "name": "BaseBdev1", 00:09:13.568 "aliases": [ 00:09:13.568 "1f5a1193-124d-4a72-a819-e9ae6d65aa69" 00:09:13.568 ], 00:09:13.568 "product_name": "Malloc disk", 00:09:13.568 "block_size": 512, 00:09:13.568 "num_blocks": 65536, 00:09:13.568 "uuid": "1f5a1193-124d-4a72-a819-e9ae6d65aa69", 00:09:13.568 "assigned_rate_limits": { 00:09:13.568 "rw_ios_per_sec": 0, 00:09:13.568 "rw_mbytes_per_sec": 0, 00:09:13.568 "r_mbytes_per_sec": 0, 00:09:13.568 "w_mbytes_per_sec": 0 00:09:13.568 }, 00:09:13.568 "claimed": true, 00:09:13.568 "claim_type": 
"exclusive_write", 00:09:13.569 "zoned": false, 00:09:13.569 "supported_io_types": { 00:09:13.569 "read": true, 00:09:13.569 "write": true, 00:09:13.569 "unmap": true, 00:09:13.569 "flush": true, 00:09:13.569 "reset": true, 00:09:13.569 "nvme_admin": false, 00:09:13.569 "nvme_io": false, 00:09:13.569 "nvme_io_md": false, 00:09:13.569 "write_zeroes": true, 00:09:13.569 "zcopy": true, 00:09:13.569 "get_zone_info": false, 00:09:13.569 "zone_management": false, 00:09:13.569 "zone_append": false, 00:09:13.569 "compare": false, 00:09:13.569 "compare_and_write": false, 00:09:13.569 "abort": true, 00:09:13.569 "seek_hole": false, 00:09:13.569 "seek_data": false, 00:09:13.569 "copy": true, 00:09:13.569 "nvme_iov_md": false 00:09:13.569 }, 00:09:13.569 "memory_domains": [ 00:09:13.569 { 00:09:13.569 "dma_device_id": "system", 00:09:13.569 "dma_device_type": 1 00:09:13.569 }, 00:09:13.569 { 00:09:13.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.569 "dma_device_type": 2 00:09:13.569 } 00:09:13.569 ], 00:09:13.569 "driver_specific": {} 00:09:13.569 } 00:09:13.569 ] 00:09:13.569 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.569 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:13.569 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:13.569 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.569 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.569 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.569 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.569 09:46:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:13.569 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.569 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.569 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.569 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.569 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.569 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.569 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.569 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.569 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.569 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.569 "name": "Existed_Raid", 00:09:13.569 "uuid": "d5af8847-0222-40e8-b2d2-63d03e52d1cb", 00:09:13.569 "strip_size_kb": 64, 00:09:13.569 "state": "configuring", 00:09:13.569 "raid_level": "concat", 00:09:13.569 "superblock": true, 00:09:13.569 "num_base_bdevs": 2, 00:09:13.569 "num_base_bdevs_discovered": 1, 00:09:13.569 "num_base_bdevs_operational": 2, 00:09:13.569 "base_bdevs_list": [ 00:09:13.569 { 00:09:13.569 "name": "BaseBdev1", 00:09:13.569 "uuid": "1f5a1193-124d-4a72-a819-e9ae6d65aa69", 00:09:13.569 "is_configured": true, 00:09:13.569 "data_offset": 2048, 00:09:13.569 "data_size": 63488 00:09:13.569 }, 00:09:13.569 { 00:09:13.569 "name": "BaseBdev2", 00:09:13.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.569 "is_configured": false, 00:09:13.569 
"data_offset": 0, 00:09:13.569 "data_size": 0 00:09:13.569 } 00:09:13.569 ] 00:09:13.569 }' 00:09:13.569 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.569 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.828 [2024-11-27 09:46:14.860608] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:13.828 [2024-11-27 09:46:14.860711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.828 [2024-11-27 09:46:14.872637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.828 [2024-11-27 09:46:14.874782] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.828 [2024-11-27 09:46:14.874828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.828 "name": "Existed_Raid", 00:09:13.828 "uuid": "4bc2e1a8-b501-466f-b00a-ae2050d219a6", 00:09:13.828 "strip_size_kb": 64, 00:09:13.828 "state": "configuring", 00:09:13.828 "raid_level": "concat", 00:09:13.828 "superblock": true, 00:09:13.828 "num_base_bdevs": 2, 00:09:13.828 "num_base_bdevs_discovered": 1, 00:09:13.828 "num_base_bdevs_operational": 2, 00:09:13.828 "base_bdevs_list": [ 00:09:13.828 { 00:09:13.828 "name": "BaseBdev1", 00:09:13.828 "uuid": "1f5a1193-124d-4a72-a819-e9ae6d65aa69", 00:09:13.828 "is_configured": true, 00:09:13.828 "data_offset": 2048, 00:09:13.828 "data_size": 63488 00:09:13.828 }, 00:09:13.828 { 00:09:13.828 "name": "BaseBdev2", 00:09:13.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.828 "is_configured": false, 00:09:13.828 "data_offset": 0, 00:09:13.828 "data_size": 0 00:09:13.828 } 00:09:13.828 ] 00:09:13.828 }' 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.828 09:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.400 [2024-11-27 09:46:15.357419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.400 [2024-11-27 09:46:15.357833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:14.400 [2024-11-27 09:46:15.357888] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:14.400 [2024-11-27 09:46:15.358216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 
00:09:14.400 BaseBdev2 00:09:14.400 [2024-11-27 09:46:15.358436] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:14.400 [2024-11-27 09:46:15.358485] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:14.400 [2024-11-27 09:46:15.358682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:14.400 [ 00:09:14.400 { 00:09:14.400 "name": "BaseBdev2", 00:09:14.400 "aliases": [ 00:09:14.400 "080df553-e5a6-407b-978e-324487d7947f" 00:09:14.400 ], 00:09:14.400 "product_name": "Malloc disk", 00:09:14.400 "block_size": 512, 00:09:14.400 "num_blocks": 65536, 00:09:14.400 "uuid": "080df553-e5a6-407b-978e-324487d7947f", 00:09:14.400 "assigned_rate_limits": { 00:09:14.400 "rw_ios_per_sec": 0, 00:09:14.400 "rw_mbytes_per_sec": 0, 00:09:14.400 "r_mbytes_per_sec": 0, 00:09:14.400 "w_mbytes_per_sec": 0 00:09:14.400 }, 00:09:14.400 "claimed": true, 00:09:14.400 "claim_type": "exclusive_write", 00:09:14.400 "zoned": false, 00:09:14.400 "supported_io_types": { 00:09:14.400 "read": true, 00:09:14.400 "write": true, 00:09:14.400 "unmap": true, 00:09:14.400 "flush": true, 00:09:14.400 "reset": true, 00:09:14.400 "nvme_admin": false, 00:09:14.400 "nvme_io": false, 00:09:14.400 "nvme_io_md": false, 00:09:14.400 "write_zeroes": true, 00:09:14.400 "zcopy": true, 00:09:14.400 "get_zone_info": false, 00:09:14.400 "zone_management": false, 00:09:14.400 "zone_append": false, 00:09:14.400 "compare": false, 00:09:14.400 "compare_and_write": false, 00:09:14.400 "abort": true, 00:09:14.400 "seek_hole": false, 00:09:14.400 "seek_data": false, 00:09:14.400 "copy": true, 00:09:14.400 "nvme_iov_md": false 00:09:14.400 }, 00:09:14.400 "memory_domains": [ 00:09:14.400 { 00:09:14.400 "dma_device_id": "system", 00:09:14.400 "dma_device_type": 1 00:09:14.400 }, 00:09:14.400 { 00:09:14.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.400 "dma_device_type": 2 00:09:14.400 } 00:09:14.400 ], 00:09:14.400 "driver_specific": {} 00:09:14.400 } 00:09:14.400 ] 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.400 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.401 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.401 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.401 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:14.401 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.401 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.401 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.401 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.401 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.401 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.401 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.401 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.401 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.401 09:46:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.401 "name": "Existed_Raid", 00:09:14.401 "uuid": "4bc2e1a8-b501-466f-b00a-ae2050d219a6", 00:09:14.401 "strip_size_kb": 64, 00:09:14.401 "state": "online", 00:09:14.401 "raid_level": "concat", 00:09:14.401 "superblock": true, 00:09:14.401 "num_base_bdevs": 2, 00:09:14.401 "num_base_bdevs_discovered": 2, 00:09:14.401 "num_base_bdevs_operational": 2, 00:09:14.401 "base_bdevs_list": [ 00:09:14.401 { 00:09:14.401 "name": "BaseBdev1", 00:09:14.401 "uuid": "1f5a1193-124d-4a72-a819-e9ae6d65aa69", 00:09:14.401 "is_configured": true, 00:09:14.401 "data_offset": 2048, 00:09:14.401 "data_size": 63488 00:09:14.401 }, 00:09:14.401 { 00:09:14.401 "name": "BaseBdev2", 00:09:14.401 "uuid": "080df553-e5a6-407b-978e-324487d7947f", 00:09:14.401 "is_configured": true, 00:09:14.401 "data_offset": 2048, 00:09:14.401 "data_size": 63488 00:09:14.401 } 00:09:14.401 ] 00:09:14.401 }' 00:09:14.401 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.401 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.971 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:14.971 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:14.971 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:14.971 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:14.971 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:14.971 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:14.971 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:14.971 09:46:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.971 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.971 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:14.971 [2024-11-27 09:46:15.848888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.971 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.971 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:14.971 "name": "Existed_Raid", 00:09:14.971 "aliases": [ 00:09:14.971 "4bc2e1a8-b501-466f-b00a-ae2050d219a6" 00:09:14.971 ], 00:09:14.971 "product_name": "Raid Volume", 00:09:14.971 "block_size": 512, 00:09:14.971 "num_blocks": 126976, 00:09:14.971 "uuid": "4bc2e1a8-b501-466f-b00a-ae2050d219a6", 00:09:14.971 "assigned_rate_limits": { 00:09:14.971 "rw_ios_per_sec": 0, 00:09:14.971 "rw_mbytes_per_sec": 0, 00:09:14.971 "r_mbytes_per_sec": 0, 00:09:14.971 "w_mbytes_per_sec": 0 00:09:14.971 }, 00:09:14.971 "claimed": false, 00:09:14.971 "zoned": false, 00:09:14.971 "supported_io_types": { 00:09:14.971 "read": true, 00:09:14.971 "write": true, 00:09:14.971 "unmap": true, 00:09:14.971 "flush": true, 00:09:14.971 "reset": true, 00:09:14.971 "nvme_admin": false, 00:09:14.971 "nvme_io": false, 00:09:14.971 "nvme_io_md": false, 00:09:14.971 "write_zeroes": true, 00:09:14.971 "zcopy": false, 00:09:14.971 "get_zone_info": false, 00:09:14.971 "zone_management": false, 00:09:14.971 "zone_append": false, 00:09:14.971 "compare": false, 00:09:14.971 "compare_and_write": false, 00:09:14.971 "abort": false, 00:09:14.971 "seek_hole": false, 00:09:14.971 "seek_data": false, 00:09:14.971 "copy": false, 00:09:14.971 "nvme_iov_md": false 00:09:14.971 }, 00:09:14.971 "memory_domains": [ 00:09:14.971 { 00:09:14.971 "dma_device_id": "system", 00:09:14.971 
"dma_device_type": 1 00:09:14.971 }, 00:09:14.971 { 00:09:14.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.971 "dma_device_type": 2 00:09:14.971 }, 00:09:14.971 { 00:09:14.971 "dma_device_id": "system", 00:09:14.971 "dma_device_type": 1 00:09:14.971 }, 00:09:14.971 { 00:09:14.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.971 "dma_device_type": 2 00:09:14.971 } 00:09:14.971 ], 00:09:14.971 "driver_specific": { 00:09:14.971 "raid": { 00:09:14.971 "uuid": "4bc2e1a8-b501-466f-b00a-ae2050d219a6", 00:09:14.971 "strip_size_kb": 64, 00:09:14.971 "state": "online", 00:09:14.971 "raid_level": "concat", 00:09:14.971 "superblock": true, 00:09:14.971 "num_base_bdevs": 2, 00:09:14.971 "num_base_bdevs_discovered": 2, 00:09:14.971 "num_base_bdevs_operational": 2, 00:09:14.971 "base_bdevs_list": [ 00:09:14.971 { 00:09:14.971 "name": "BaseBdev1", 00:09:14.971 "uuid": "1f5a1193-124d-4a72-a819-e9ae6d65aa69", 00:09:14.971 "is_configured": true, 00:09:14.971 "data_offset": 2048, 00:09:14.971 "data_size": 63488 00:09:14.971 }, 00:09:14.971 { 00:09:14.971 "name": "BaseBdev2", 00:09:14.971 "uuid": "080df553-e5a6-407b-978e-324487d7947f", 00:09:14.971 "is_configured": true, 00:09:14.971 "data_offset": 2048, 00:09:14.971 "data_size": 63488 00:09:14.971 } 00:09:14.971 ] 00:09:14.971 } 00:09:14.971 } 00:09:14.971 }' 00:09:14.971 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:14.971 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:14.971 BaseBdev2' 00:09:14.971 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.971 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:14.971 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:09:14.972 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:14.972 09:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.972 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.972 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.972 09:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.972 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.972 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.972 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.972 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.972 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:14.972 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.972 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.972 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.972 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.972 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.972 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:14.972 09:46:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.972 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.972 [2024-11-27 09:46:16.072277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:14.972 [2024-11-27 09:46:16.072314] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:14.972 [2024-11-27 09:46:16.072368] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.232 "name": "Existed_Raid", 00:09:15.232 "uuid": "4bc2e1a8-b501-466f-b00a-ae2050d219a6", 00:09:15.232 "strip_size_kb": 64, 00:09:15.232 "state": "offline", 00:09:15.232 "raid_level": "concat", 00:09:15.232 "superblock": true, 00:09:15.232 "num_base_bdevs": 2, 00:09:15.232 "num_base_bdevs_discovered": 1, 00:09:15.232 "num_base_bdevs_operational": 1, 00:09:15.232 "base_bdevs_list": [ 00:09:15.232 { 00:09:15.232 "name": null, 00:09:15.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.232 "is_configured": false, 00:09:15.232 "data_offset": 0, 00:09:15.232 "data_size": 63488 00:09:15.232 }, 00:09:15.232 { 00:09:15.232 "name": "BaseBdev2", 00:09:15.232 "uuid": "080df553-e5a6-407b-978e-324487d7947f", 00:09:15.232 "is_configured": true, 00:09:15.232 "data_offset": 2048, 00:09:15.232 "data_size": 63488 00:09:15.232 } 00:09:15.232 ] 00:09:15.232 }' 00:09:15.232 09:46:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.232 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.803 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:15.803 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:15.803 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.803 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.803 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.803 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:15.803 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.803 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:15.803 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:15.803 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:15.803 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.803 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.803 [2024-11-27 09:46:16.696951] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:15.803 [2024-11-27 09:46:16.697083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:15.803 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.803 09:46:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:15.803 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:15.803 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:15.803 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.803 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.803 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.804 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.804 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:15.804 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:15.804 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:15.804 09:46:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62211 00:09:15.804 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62211 ']' 00:09:15.804 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62211 00:09:15.804 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:15.804 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.804 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62211 00:09:15.804 killing process with pid 62211 00:09:15.804 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.804 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:09:15.804 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62211' 00:09:15.804 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62211 00:09:15.804 [2024-11-27 09:46:16.889455] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:15.804 09:46:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62211 00:09:15.804 [2024-11-27 09:46:16.908089] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:17.186 ************************************ 00:09:17.186 END TEST raid_state_function_test_sb 00:09:17.186 ************************************ 00:09:17.186 09:46:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:17.186 00:09:17.186 real 0m5.186s 00:09:17.186 user 0m7.347s 00:09:17.186 sys 0m0.903s 00:09:17.186 09:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.186 09:46:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.186 09:46:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:09:17.186 09:46:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:17.186 09:46:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.186 09:46:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.186 ************************************ 00:09:17.186 START TEST raid_superblock_test 00:09:17.186 ************************************ 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 
00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62463 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62463 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62463 ']' 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:17.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.186 09:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.186 [2024-11-27 09:46:18.298150] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:09:17.186 [2024-11-27 09:46:18.298417] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62463 ] 00:09:17.446 [2024-11-27 09:46:18.477877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.706 [2024-11-27 09:46:18.620084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.966 [2024-11-27 09:46:18.859292] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.966 [2024-11-27 09:46:18.859475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.227 malloc1 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.227 [2024-11-27 09:46:19.207698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:18.227 [2024-11-27 09:46:19.207834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.227 [2024-11-27 09:46:19.207888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:18.227 [2024-11-27 09:46:19.207933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.227 [2024-11-27 09:46:19.210682] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.227 [2024-11-27 09:46:19.210760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:18.227 pt1 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.227 malloc2 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.227 09:46:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.227 [2024-11-27 09:46:19.262835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:18.227 [2024-11-27 09:46:19.262903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.227 [2024-11-27 09:46:19.262937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:18.227 [2024-11-27 09:46:19.262947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.227 [2024-11-27 09:46:19.265578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.227 [2024-11-27 09:46:19.265660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:18.227 pt2 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.227 [2024-11-27 09:46:19.270891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:18.227 [2024-11-27 09:46:19.273206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:18.227 [2024-11-27 09:46:19.273466] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:18.227 [2024-11-27 09:46:19.273519] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:18.227 
[2024-11-27 09:46:19.273857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:18.227 [2024-11-27 09:46:19.274104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:18.227 [2024-11-27 09:46:19.274152] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:18.227 [2024-11-27 09:46:19.274413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.227 09:46:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.227 "name": "raid_bdev1", 00:09:18.227 "uuid": "9e31fecf-36b1-4e5a-b5b8-b8d04e1d2b63", 00:09:18.227 "strip_size_kb": 64, 00:09:18.227 "state": "online", 00:09:18.227 "raid_level": "concat", 00:09:18.227 "superblock": true, 00:09:18.227 "num_base_bdevs": 2, 00:09:18.227 "num_base_bdevs_discovered": 2, 00:09:18.227 "num_base_bdevs_operational": 2, 00:09:18.227 "base_bdevs_list": [ 00:09:18.227 { 00:09:18.227 "name": "pt1", 00:09:18.227 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.227 "is_configured": true, 00:09:18.227 "data_offset": 2048, 00:09:18.227 "data_size": 63488 00:09:18.227 }, 00:09:18.227 { 00:09:18.227 "name": "pt2", 00:09:18.227 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.227 "is_configured": true, 00:09:18.227 "data_offset": 2048, 00:09:18.227 "data_size": 63488 00:09:18.227 } 00:09:18.227 ] 00:09:18.227 }' 00:09:18.227 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.228 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.797 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:18.797 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:18.797 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:18.797 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:18.797 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:18.797 
09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:18.797 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:18.797 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.797 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.797 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:18.797 [2024-11-27 09:46:19.678511] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.797 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.797 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:18.797 "name": "raid_bdev1", 00:09:18.797 "aliases": [ 00:09:18.797 "9e31fecf-36b1-4e5a-b5b8-b8d04e1d2b63" 00:09:18.797 ], 00:09:18.797 "product_name": "Raid Volume", 00:09:18.797 "block_size": 512, 00:09:18.797 "num_blocks": 126976, 00:09:18.797 "uuid": "9e31fecf-36b1-4e5a-b5b8-b8d04e1d2b63", 00:09:18.797 "assigned_rate_limits": { 00:09:18.797 "rw_ios_per_sec": 0, 00:09:18.797 "rw_mbytes_per_sec": 0, 00:09:18.797 "r_mbytes_per_sec": 0, 00:09:18.797 "w_mbytes_per_sec": 0 00:09:18.797 }, 00:09:18.797 "claimed": false, 00:09:18.797 "zoned": false, 00:09:18.797 "supported_io_types": { 00:09:18.797 "read": true, 00:09:18.797 "write": true, 00:09:18.797 "unmap": true, 00:09:18.797 "flush": true, 00:09:18.797 "reset": true, 00:09:18.797 "nvme_admin": false, 00:09:18.797 "nvme_io": false, 00:09:18.797 "nvme_io_md": false, 00:09:18.797 "write_zeroes": true, 00:09:18.797 "zcopy": false, 00:09:18.797 "get_zone_info": false, 00:09:18.797 "zone_management": false, 00:09:18.797 "zone_append": false, 00:09:18.797 "compare": false, 00:09:18.797 "compare_and_write": false, 00:09:18.797 "abort": false, 00:09:18.797 "seek_hole": false, 00:09:18.797 
"seek_data": false, 00:09:18.797 "copy": false, 00:09:18.797 "nvme_iov_md": false 00:09:18.797 }, 00:09:18.797 "memory_domains": [ 00:09:18.797 { 00:09:18.797 "dma_device_id": "system", 00:09:18.797 "dma_device_type": 1 00:09:18.797 }, 00:09:18.797 { 00:09:18.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.797 "dma_device_type": 2 00:09:18.797 }, 00:09:18.797 { 00:09:18.797 "dma_device_id": "system", 00:09:18.797 "dma_device_type": 1 00:09:18.797 }, 00:09:18.797 { 00:09:18.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.797 "dma_device_type": 2 00:09:18.797 } 00:09:18.797 ], 00:09:18.797 "driver_specific": { 00:09:18.797 "raid": { 00:09:18.797 "uuid": "9e31fecf-36b1-4e5a-b5b8-b8d04e1d2b63", 00:09:18.797 "strip_size_kb": 64, 00:09:18.797 "state": "online", 00:09:18.797 "raid_level": "concat", 00:09:18.797 "superblock": true, 00:09:18.797 "num_base_bdevs": 2, 00:09:18.797 "num_base_bdevs_discovered": 2, 00:09:18.797 "num_base_bdevs_operational": 2, 00:09:18.797 "base_bdevs_list": [ 00:09:18.797 { 00:09:18.797 "name": "pt1", 00:09:18.797 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:18.797 "is_configured": true, 00:09:18.797 "data_offset": 2048, 00:09:18.797 "data_size": 63488 00:09:18.797 }, 00:09:18.797 { 00:09:18.798 "name": "pt2", 00:09:18.798 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:18.798 "is_configured": true, 00:09:18.798 "data_offset": 2048, 00:09:18.798 "data_size": 63488 00:09:18.798 } 00:09:18.798 ] 00:09:18.798 } 00:09:18.798 } 00:09:18.798 }' 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:18.798 pt2' 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.798 09:46:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.798 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:19.058 [2024-11-27 09:46:19.926064] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.058 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.058 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9e31fecf-36b1-4e5a-b5b8-b8d04e1d2b63 00:09:19.058 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9e31fecf-36b1-4e5a-b5b8-b8d04e1d2b63 ']' 00:09:19.058 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:19.058 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.058 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.058 [2024-11-27 09:46:19.973626] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.058 [2024-11-27 09:46:19.973660] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.058 [2024-11-27 09:46:19.973788] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.058 [2024-11-27 09:46:19.973848] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.058 [2024-11-27 09:46:19.973862] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:19.058 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.058 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:09:19.058 09:46:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.058 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.058 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.058 09:46:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.058 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:19.058 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:19.058 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:19.058 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:19.058 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.058 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.058 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.058 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:19.058 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:19.058 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.058 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.058 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.058 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:19.058 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:19.058 09:46:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.058 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.058 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.058 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.059 [2024-11-27 09:46:20.105468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:19.059 [2024-11-27 09:46:20.107954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:19.059 [2024-11-27 09:46:20.108081] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:19.059 [2024-11-27 09:46:20.108160] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:19.059 [2024-11-27 09:46:20.108178] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.059 [2024-11-27 09:46:20.108191] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:19.059 request: 00:09:19.059 { 00:09:19.059 "name": "raid_bdev1", 00:09:19.059 "raid_level": "concat", 00:09:19.059 "base_bdevs": [ 00:09:19.059 "malloc1", 00:09:19.059 "malloc2" 00:09:19.059 ], 00:09:19.059 "strip_size_kb": 64, 00:09:19.059 "superblock": false, 00:09:19.059 "method": "bdev_raid_create", 00:09:19.059 "req_id": 1 00:09:19.059 } 00:09:19.059 Got JSON-RPC error response 00:09:19.059 response: 00:09:19.059 { 00:09:19.059 "code": -17, 00:09:19.059 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:19.059 } 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:19.059 
09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.059 [2024-11-27 09:46:20.169367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:19.059 [2024-11-27 09:46:20.169458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.059 [2024-11-27 09:46:20.169483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:19.059 [2024-11-27 09:46:20.169496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.059 [2024-11-27 09:46:20.172517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.059 [2024-11-27 09:46:20.172567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:19.059 [2024-11-27 09:46:20.172686] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:19.059 [2024-11-27 09:46:20.172763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:19.059 pt1 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.059 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.318 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.318 "name": "raid_bdev1", 00:09:19.318 "uuid": "9e31fecf-36b1-4e5a-b5b8-b8d04e1d2b63", 00:09:19.318 "strip_size_kb": 64, 00:09:19.318 "state": "configuring", 00:09:19.318 "raid_level": "concat", 00:09:19.318 "superblock": true, 00:09:19.318 "num_base_bdevs": 2, 00:09:19.318 "num_base_bdevs_discovered": 1, 00:09:19.318 "num_base_bdevs_operational": 2, 00:09:19.318 "base_bdevs_list": [ 00:09:19.318 { 00:09:19.318 "name": "pt1", 00:09:19.318 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:09:19.318 "is_configured": true, 00:09:19.318 "data_offset": 2048, 00:09:19.318 "data_size": 63488 00:09:19.318 }, 00:09:19.318 { 00:09:19.318 "name": null, 00:09:19.318 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.318 "is_configured": false, 00:09:19.318 "data_offset": 2048, 00:09:19.318 "data_size": 63488 00:09:19.318 } 00:09:19.318 ] 00:09:19.318 }' 00:09:19.318 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.318 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.577 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:19.577 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:19.577 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:19.577 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:19.577 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.577 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.578 [2024-11-27 09:46:20.572658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:19.578 [2024-11-27 09:46:20.572761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:19.578 [2024-11-27 09:46:20.572788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:19.578 [2024-11-27 09:46:20.572801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:19.578 [2024-11-27 09:46:20.573444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:19.578 [2024-11-27 09:46:20.573477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:09:19.578 [2024-11-27 09:46:20.573583] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:19.578 [2024-11-27 09:46:20.573618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:19.578 [2024-11-27 09:46:20.573753] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:19.578 [2024-11-27 09:46:20.573766] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:19.578 [2024-11-27 09:46:20.574085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:19.578 [2024-11-27 09:46:20.574259] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:19.578 [2024-11-27 09:46:20.574270] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:19.578 [2024-11-27 09:46:20.574443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.578 pt2 00:09:19.578 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.578 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:19.578 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:19.578 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:19.578 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.578 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.578 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.578 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.578 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:09:19.578 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.578 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.578 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.578 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.578 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.578 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.578 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.578 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.578 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.578 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.578 "name": "raid_bdev1", 00:09:19.578 "uuid": "9e31fecf-36b1-4e5a-b5b8-b8d04e1d2b63", 00:09:19.578 "strip_size_kb": 64, 00:09:19.578 "state": "online", 00:09:19.578 "raid_level": "concat", 00:09:19.578 "superblock": true, 00:09:19.578 "num_base_bdevs": 2, 00:09:19.578 "num_base_bdevs_discovered": 2, 00:09:19.578 "num_base_bdevs_operational": 2, 00:09:19.578 "base_bdevs_list": [ 00:09:19.578 { 00:09:19.578 "name": "pt1", 00:09:19.578 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:19.578 "is_configured": true, 00:09:19.578 "data_offset": 2048, 00:09:19.578 "data_size": 63488 00:09:19.578 }, 00:09:19.578 { 00:09:19.578 "name": "pt2", 00:09:19.578 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:19.578 "is_configured": true, 00:09:19.578 "data_offset": 2048, 00:09:19.578 "data_size": 63488 00:09:19.578 } 00:09:19.578 ] 00:09:19.578 }' 00:09:19.578 09:46:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.578 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.147 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:20.147 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:20.147 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:20.147 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:20.147 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:20.147 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:20.147 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:20.147 09:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:20.147 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.147 09:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.147 [2024-11-27 09:46:21.000254] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.147 09:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.147 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.147 "name": "raid_bdev1", 00:09:20.147 "aliases": [ 00:09:20.147 "9e31fecf-36b1-4e5a-b5b8-b8d04e1d2b63" 00:09:20.147 ], 00:09:20.147 "product_name": "Raid Volume", 00:09:20.147 "block_size": 512, 00:09:20.147 "num_blocks": 126976, 00:09:20.147 "uuid": "9e31fecf-36b1-4e5a-b5b8-b8d04e1d2b63", 00:09:20.147 "assigned_rate_limits": { 00:09:20.147 "rw_ios_per_sec": 0, 00:09:20.147 "rw_mbytes_per_sec": 0, 00:09:20.147 
"r_mbytes_per_sec": 0, 00:09:20.147 "w_mbytes_per_sec": 0 00:09:20.147 }, 00:09:20.147 "claimed": false, 00:09:20.147 "zoned": false, 00:09:20.147 "supported_io_types": { 00:09:20.147 "read": true, 00:09:20.147 "write": true, 00:09:20.147 "unmap": true, 00:09:20.147 "flush": true, 00:09:20.147 "reset": true, 00:09:20.147 "nvme_admin": false, 00:09:20.147 "nvme_io": false, 00:09:20.147 "nvme_io_md": false, 00:09:20.147 "write_zeroes": true, 00:09:20.147 "zcopy": false, 00:09:20.147 "get_zone_info": false, 00:09:20.147 "zone_management": false, 00:09:20.147 "zone_append": false, 00:09:20.147 "compare": false, 00:09:20.147 "compare_and_write": false, 00:09:20.147 "abort": false, 00:09:20.147 "seek_hole": false, 00:09:20.147 "seek_data": false, 00:09:20.147 "copy": false, 00:09:20.147 "nvme_iov_md": false 00:09:20.147 }, 00:09:20.147 "memory_domains": [ 00:09:20.147 { 00:09:20.147 "dma_device_id": "system", 00:09:20.147 "dma_device_type": 1 00:09:20.147 }, 00:09:20.147 { 00:09:20.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.147 "dma_device_type": 2 00:09:20.147 }, 00:09:20.147 { 00:09:20.147 "dma_device_id": "system", 00:09:20.147 "dma_device_type": 1 00:09:20.147 }, 00:09:20.147 { 00:09:20.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.147 "dma_device_type": 2 00:09:20.147 } 00:09:20.147 ], 00:09:20.147 "driver_specific": { 00:09:20.148 "raid": { 00:09:20.148 "uuid": "9e31fecf-36b1-4e5a-b5b8-b8d04e1d2b63", 00:09:20.148 "strip_size_kb": 64, 00:09:20.148 "state": "online", 00:09:20.148 "raid_level": "concat", 00:09:20.148 "superblock": true, 00:09:20.148 "num_base_bdevs": 2, 00:09:20.148 "num_base_bdevs_discovered": 2, 00:09:20.148 "num_base_bdevs_operational": 2, 00:09:20.148 "base_bdevs_list": [ 00:09:20.148 { 00:09:20.148 "name": "pt1", 00:09:20.148 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.148 "is_configured": true, 00:09:20.148 "data_offset": 2048, 00:09:20.148 "data_size": 63488 00:09:20.148 }, 00:09:20.148 { 00:09:20.148 "name": 
"pt2", 00:09:20.148 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.148 "is_configured": true, 00:09:20.148 "data_offset": 2048, 00:09:20.148 "data_size": 63488 00:09:20.148 } 00:09:20.148 ] 00:09:20.148 } 00:09:20.148 } 00:09:20.148 }' 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:20.148 pt2' 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.148 09:46:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.148 [2024-11-27 09:46:21.183913] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9e31fecf-36b1-4e5a-b5b8-b8d04e1d2b63 '!=' 9e31fecf-36b1-4e5a-b5b8-b8d04e1d2b63 ']' 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62463 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62463 ']' 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 62463 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62463 00:09:20.148 killing process with pid 62463 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62463' 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62463 00:09:20.148 09:46:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62463 00:09:20.148 [2024-11-27 09:46:21.239393] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.148 [2024-11-27 09:46:21.239527] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.148 [2024-11-27 09:46:21.239596] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.148 [2024-11-27 09:46:21.239610] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:20.407 [2024-11-27 09:46:21.477535] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.784 ************************************ 00:09:21.784 END TEST raid_superblock_test 00:09:21.784 ************************************ 00:09:21.784 09:46:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:21.784 00:09:21.784 real 0m4.526s 00:09:21.784 user 0m6.134s 00:09:21.785 sys 0m0.802s 00:09:21.785 09:46:22 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.785 09:46:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.785 09:46:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:09:21.785 09:46:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:21.785 09:46:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.785 09:46:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.785 ************************************ 00:09:21.785 START TEST raid_read_error_test 00:09:21.785 ************************************ 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 
-- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UnuxW9HYHg 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62670 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62670 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62670 ']' 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.785 09:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.785 [2024-11-27 09:46:22.896922] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:09:21.785 [2024-11-27 09:46:22.897106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62670 ] 00:09:22.044 [2024-11-27 09:46:23.059794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.303 [2024-11-27 09:46:23.202366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.563 [2024-11-27 09:46:23.439697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.563 [2024-11-27 09:46:23.439752] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.824 BaseBdev1_malloc 
00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.824 true 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.824 [2024-11-27 09:46:23.806342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:22.824 [2024-11-27 09:46:23.806422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.824 [2024-11-27 09:46:23.806448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:22.824 [2024-11-27 09:46:23.806459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.824 [2024-11-27 09:46:23.809110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.824 [2024-11-27 09:46:23.809252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:22.824 BaseBdev1 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.824 BaseBdev2_malloc 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.824 true 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.824 [2024-11-27 09:46:23.875230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:22.824 [2024-11-27 09:46:23.875352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.824 [2024-11-27 09:46:23.875377] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:22.824 [2024-11-27 09:46:23.875389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.824 [2024-11-27 09:46:23.878172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.824 [2024-11-27 09:46:23.878215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:22.824 BaseBdev2 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.824 [2024-11-27 09:46:23.883307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.824 [2024-11-27 09:46:23.885723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.824 [2024-11-27 09:46:23.885968] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:22.824 [2024-11-27 09:46:23.885987] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:22.824 [2024-11-27 09:46:23.886321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:22.824 [2024-11-27 09:46:23.886553] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:22.824 [2024-11-27 09:46:23.886569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:22.824 [2024-11-27 09:46:23.886760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.824 "name": "raid_bdev1", 00:09:22.824 "uuid": "02a36263-17c3-46db-b574-67b2a617fb07", 00:09:22.824 "strip_size_kb": 64, 00:09:22.824 "state": "online", 00:09:22.824 "raid_level": "concat", 00:09:22.824 "superblock": true, 00:09:22.824 "num_base_bdevs": 2, 00:09:22.824 "num_base_bdevs_discovered": 2, 00:09:22.824 "num_base_bdevs_operational": 2, 00:09:22.824 "base_bdevs_list": [ 00:09:22.824 { 00:09:22.824 "name": "BaseBdev1", 00:09:22.824 "uuid": "b45c0bba-7f31-5e26-b42c-e49e063677f8", 00:09:22.824 "is_configured": true, 00:09:22.824 "data_offset": 2048, 00:09:22.824 "data_size": 63488 00:09:22.824 }, 00:09:22.824 { 00:09:22.824 "name": "BaseBdev2", 00:09:22.824 
"uuid": "3f2cb529-a404-5c22-9d2d-8b44d3bc3775", 00:09:22.824 "is_configured": true, 00:09:22.824 "data_offset": 2048, 00:09:22.824 "data_size": 63488 00:09:22.824 } 00:09:22.824 ] 00:09:22.824 }' 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.824 09:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.394 09:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:23.394 09:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:23.394 [2024-11-27 09:46:24.339928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.335 "name": "raid_bdev1", 00:09:24.335 "uuid": "02a36263-17c3-46db-b574-67b2a617fb07", 00:09:24.335 "strip_size_kb": 64, 00:09:24.335 "state": "online", 00:09:24.335 "raid_level": "concat", 00:09:24.335 "superblock": true, 00:09:24.335 "num_base_bdevs": 2, 00:09:24.335 "num_base_bdevs_discovered": 2, 00:09:24.335 "num_base_bdevs_operational": 2, 00:09:24.335 "base_bdevs_list": [ 00:09:24.335 { 00:09:24.335 "name": "BaseBdev1", 00:09:24.335 "uuid": "b45c0bba-7f31-5e26-b42c-e49e063677f8", 00:09:24.335 "is_configured": true, 00:09:24.335 "data_offset": 2048, 00:09:24.335 "data_size": 63488 00:09:24.335 }, 00:09:24.335 { 00:09:24.335 "name": "BaseBdev2", 00:09:24.335 "uuid": 
"3f2cb529-a404-5c22-9d2d-8b44d3bc3775", 00:09:24.335 "is_configured": true, 00:09:24.335 "data_offset": 2048, 00:09:24.335 "data_size": 63488 00:09:24.335 } 00:09:24.335 ] 00:09:24.335 }' 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.335 09:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.594 09:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:24.595 09:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.595 09:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.595 [2024-11-27 09:46:25.717579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:24.595 [2024-11-27 09:46:25.717699] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.595 [2024-11-27 09:46:25.720851] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.595 [2024-11-27 09:46:25.720959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.595 [2024-11-27 09:46:25.721028] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:24.595 [2024-11-27 09:46:25.721099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:24.595 { 00:09:24.595 "results": [ 00:09:24.595 { 00:09:24.595 "job": "raid_bdev1", 00:09:24.595 "core_mask": "0x1", 00:09:24.595 "workload": "randrw", 00:09:24.595 "percentage": 50, 00:09:24.595 "status": "finished", 00:09:24.595 "queue_depth": 1, 00:09:24.595 "io_size": 131072, 00:09:24.595 "runtime": 1.378277, 00:09:24.595 "iops": 13214.324841813366, 00:09:24.595 "mibps": 1651.7906052266708, 00:09:24.595 "io_failed": 1, 00:09:24.595 "io_timeout": 0, 00:09:24.595 "avg_latency_us": 
106.20710917222368, 00:09:24.595 "min_latency_us": 26.494323144104804, 00:09:24.595 "max_latency_us": 1488.1537117903931 00:09:24.595 } 00:09:24.595 ], 00:09:24.595 "core_count": 1 00:09:24.595 } 00:09:24.595 09:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.595 09:46:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62670 00:09:24.595 09:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62670 ']' 00:09:24.595 09:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62670 00:09:24.854 09:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:24.854 09:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.854 09:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62670 00:09:24.854 killing process with pid 62670 00:09:24.854 09:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.854 09:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.854 09:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62670' 00:09:24.854 09:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62670 00:09:24.854 09:46:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62670 00:09:24.854 [2024-11-27 09:46:25.758565] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:24.854 [2024-11-27 09:46:25.918360] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:26.235 09:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:26.235 09:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:26.235 09:46:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UnuxW9HYHg 00:09:26.235 09:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:26.235 09:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:26.235 09:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:26.235 09:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:26.235 09:46:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:26.236 00:09:26.236 real 0m4.445s 00:09:26.236 user 0m5.158s 00:09:26.236 sys 0m0.599s 00:09:26.236 09:46:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.236 09:46:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.236 ************************************ 00:09:26.236 END TEST raid_read_error_test 00:09:26.236 ************************************ 00:09:26.236 09:46:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:09:26.236 09:46:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:26.236 09:46:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.236 09:46:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:26.236 ************************************ 00:09:26.236 START TEST raid_write_error_test 00:09:26.236 ************************************ 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local 
error_io_type=write 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 
00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.bgCewbTFLh 00:09:26.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62816 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62816 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62816 ']' 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.236 09:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:26.495 [2024-11-27 09:46:27.394179] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:09:26.495 [2024-11-27 09:46:27.394344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62816 ] 00:09:26.495 [2024-11-27 09:46:27.577140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.755 [2024-11-27 09:46:27.720478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.014 [2024-11-27 09:46:27.973070] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.014 [2024-11-27 09:46:27.973136] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.274 BaseBdev1_malloc 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.274 true 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.274 [2024-11-27 09:46:28.325294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:27.274 [2024-11-27 09:46:28.325367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.274 [2024-11-27 09:46:28.325393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:27.274 [2024-11-27 09:46:28.325406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.274 [2024-11-27 09:46:28.328210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.274 [2024-11-27 09:46:28.328322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:27.274 BaseBdev1 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.274 BaseBdev2_malloc 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:27.274 09:46:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.274 true 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.274 [2024-11-27 09:46:28.397012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:27.274 [2024-11-27 09:46:28.397086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.274 [2024-11-27 09:46:28.397107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:27.274 [2024-11-27 09:46:28.397120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.274 [2024-11-27 09:46:28.399785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.274 [2024-11-27 09:46:28.399889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:27.274 BaseBdev2 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.274 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.533 [2024-11-27 09:46:28.409086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:27.533 [2024-11-27 09:46:28.411521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.533 [2024-11-27 09:46:28.411800] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:27.533 [2024-11-27 09:46:28.411868] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:27.533 [2024-11-27 09:46:28.412220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:27.533 [2024-11-27 09:46:28.412487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:27.533 [2024-11-27 09:46:28.412540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:27.533 [2024-11-27 09:46:28.412786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.533 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.533 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:27.533 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:27.533 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.533 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.533 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.533 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:27.533 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.533 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.533 09:46:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.533 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.533 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.533 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.533 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.533 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.534 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.534 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.534 "name": "raid_bdev1", 00:09:27.534 "uuid": "a8a24f6f-5731-4b92-b28e-e67863dab69a", 00:09:27.534 "strip_size_kb": 64, 00:09:27.534 "state": "online", 00:09:27.534 "raid_level": "concat", 00:09:27.534 "superblock": true, 00:09:27.534 "num_base_bdevs": 2, 00:09:27.534 "num_base_bdevs_discovered": 2, 00:09:27.534 "num_base_bdevs_operational": 2, 00:09:27.534 "base_bdevs_list": [ 00:09:27.534 { 00:09:27.534 "name": "BaseBdev1", 00:09:27.534 "uuid": "fae83cea-a608-5237-94c3-ece11a3fa09e", 00:09:27.534 "is_configured": true, 00:09:27.534 "data_offset": 2048, 00:09:27.534 "data_size": 63488 00:09:27.534 }, 00:09:27.534 { 00:09:27.534 "name": "BaseBdev2", 00:09:27.534 "uuid": "c3c1e158-cb33-5342-b75d-da1caa4635da", 00:09:27.534 "is_configured": true, 00:09:27.534 "data_offset": 2048, 00:09:27.534 "data_size": 63488 00:09:27.534 } 00:09:27.534 ] 00:09:27.534 }' 00:09:27.534 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.534 09:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.793 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:09:27.793 09:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:28.050 [2024-11-27 09:46:28.941599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:28.986 09:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:28.986 09:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.986 09:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.986 09:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.986 09:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:28.986 09:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:28.986 09:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:28.986 09:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:28.986 09:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.986 09:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.986 09:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.987 09:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.987 09:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:28.987 09:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.987 09:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:28.987 09:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.987 09:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.987 09:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.987 09:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.987 09:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.987 09:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.987 09:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.987 09:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.987 "name": "raid_bdev1", 00:09:28.987 "uuid": "a8a24f6f-5731-4b92-b28e-e67863dab69a", 00:09:28.987 "strip_size_kb": 64, 00:09:28.987 "state": "online", 00:09:28.987 "raid_level": "concat", 00:09:28.987 "superblock": true, 00:09:28.987 "num_base_bdevs": 2, 00:09:28.987 "num_base_bdevs_discovered": 2, 00:09:28.987 "num_base_bdevs_operational": 2, 00:09:28.987 "base_bdevs_list": [ 00:09:28.987 { 00:09:28.987 "name": "BaseBdev1", 00:09:28.987 "uuid": "fae83cea-a608-5237-94c3-ece11a3fa09e", 00:09:28.987 "is_configured": true, 00:09:28.987 "data_offset": 2048, 00:09:28.987 "data_size": 63488 00:09:28.987 }, 00:09:28.987 { 00:09:28.987 "name": "BaseBdev2", 00:09:28.987 "uuid": "c3c1e158-cb33-5342-b75d-da1caa4635da", 00:09:28.987 "is_configured": true, 00:09:28.987 "data_offset": 2048, 00:09:28.987 "data_size": 63488 00:09:28.987 } 00:09:28.987 ] 00:09:28.987 }' 00:09:28.987 09:46:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.987 09:46:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.247 09:46:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:29.247 09:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.247 09:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.247 [2024-11-27 09:46:30.335580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:29.247 [2024-11-27 09:46:30.335624] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.247 [2024-11-27 09:46:30.338667] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.247 [2024-11-27 09:46:30.338719] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.247 [2024-11-27 09:46:30.338758] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.247 [2024-11-27 09:46:30.338776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:29.247 { 00:09:29.247 "results": [ 00:09:29.247 { 00:09:29.247 "job": "raid_bdev1", 00:09:29.247 "core_mask": "0x1", 00:09:29.247 "workload": "randrw", 00:09:29.247 "percentage": 50, 00:09:29.247 "status": "finished", 00:09:29.247 "queue_depth": 1, 00:09:29.247 "io_size": 131072, 00:09:29.247 "runtime": 1.39424, 00:09:29.247 "iops": 12804.108331420703, 00:09:29.247 "mibps": 1600.5135414275878, 00:09:29.247 "io_failed": 1, 00:09:29.247 "io_timeout": 0, 00:09:29.247 "avg_latency_us": 109.5719113174868, 00:09:29.247 "min_latency_us": 26.829694323144103, 00:09:29.247 "max_latency_us": 1473.844541484716 00:09:29.247 } 00:09:29.247 ], 00:09:29.247 "core_count": 1 00:09:29.247 } 00:09:29.247 09:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.247 09:46:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62816 00:09:29.247 09:46:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62816 ']' 00:09:29.247 09:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62816 00:09:29.247 09:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:29.247 09:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.247 09:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62816 00:09:29.507 09:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.507 09:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.507 09:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62816' 00:09:29.507 killing process with pid 62816 00:09:29.507 09:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62816 00:09:29.507 09:46:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62816 00:09:29.507 [2024-11-27 09:46:30.381039] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:29.507 [2024-11-27 09:46:30.536045] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:30.887 09:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.bgCewbTFLh 00:09:30.887 09:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:30.887 09:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:30.887 ************************************ 00:09:30.887 END TEST raid_write_error_test 00:09:30.887 ************************************ 00:09:30.887 09:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:30.887 09:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # 
has_redundancy concat 00:09:30.887 09:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:30.887 09:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:30.887 09:46:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:30.887 00:09:30.887 real 0m4.609s 00:09:30.887 user 0m5.430s 00:09:30.887 sys 0m0.650s 00:09:30.887 09:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.887 09:46:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.887 09:46:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:30.887 09:46:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:09:30.887 09:46:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:30.887 09:46:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.887 09:46:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:30.887 ************************************ 00:09:30.887 START TEST raid_state_function_test 00:09:30.887 ************************************ 00:09:30.887 09:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:09:30.887 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:30.887 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:30.887 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:30.887 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:30.887 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:30.887 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( 
i <= num_base_bdevs )) 00:09:30.887 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:30.887 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:30.888 Process raid pid: 62954 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62954 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62954' 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62954 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62954 ']' 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.888 09:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.147 [2024-11-27 09:46:32.077148] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:09:31.147 [2024-11-27 09:46:32.077416] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.147 [2024-11-27 09:46:32.239683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.407 [2024-11-27 09:46:32.386663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.666 [2024-11-27 09:46:32.640516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.666 [2024-11-27 09:46:32.640586] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.927 09:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.927 09:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:31.927 09:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:31.927 09:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.927 09:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.927 [2024-11-27 09:46:32.951474] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:31.927 [2024-11-27 09:46:32.951539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:31.927 [2024-11-27 09:46:32.951550] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.927 [2024-11-27 09:46:32.951577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.927 09:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.927 09:46:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:31.927 09:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.927 09:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.927 09:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.927 09:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.927 09:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:31.927 09:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.927 09:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.927 09:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.927 09:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.927 09:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.927 09:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.927 09:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.927 09:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.927 09:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.927 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.927 "name": "Existed_Raid", 00:09:31.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.927 "strip_size_kb": 0, 00:09:31.927 "state": "configuring", 00:09:31.927 
"raid_level": "raid1", 00:09:31.927 "superblock": false, 00:09:31.927 "num_base_bdevs": 2, 00:09:31.927 "num_base_bdevs_discovered": 0, 00:09:31.927 "num_base_bdevs_operational": 2, 00:09:31.927 "base_bdevs_list": [ 00:09:31.927 { 00:09:31.927 "name": "BaseBdev1", 00:09:31.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.927 "is_configured": false, 00:09:31.927 "data_offset": 0, 00:09:31.927 "data_size": 0 00:09:31.927 }, 00:09:31.927 { 00:09:31.927 "name": "BaseBdev2", 00:09:31.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.927 "is_configured": false, 00:09:31.927 "data_offset": 0, 00:09:31.927 "data_size": 0 00:09:31.927 } 00:09:31.927 ] 00:09:31.927 }' 00:09:31.927 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.927 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.497 [2024-11-27 09:46:33.386667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:32.497 [2024-11-27 09:46:33.386766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:32.497 [2024-11-27 09:46:33.394626] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.497 [2024-11-27 09:46:33.394722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.497 [2024-11-27 09:46:33.394752] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.497 [2024-11-27 09:46:33.394780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.497 [2024-11-27 09:46:33.446249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:32.497 BaseBdev1 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.497 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.498 [ 00:09:32.498 { 00:09:32.498 "name": "BaseBdev1", 00:09:32.498 "aliases": [ 00:09:32.498 "73ce0675-bc59-446a-8b7e-cedeb2ab7916" 00:09:32.498 ], 00:09:32.498 "product_name": "Malloc disk", 00:09:32.498 "block_size": 512, 00:09:32.498 "num_blocks": 65536, 00:09:32.498 "uuid": "73ce0675-bc59-446a-8b7e-cedeb2ab7916", 00:09:32.498 "assigned_rate_limits": { 00:09:32.498 "rw_ios_per_sec": 0, 00:09:32.498 "rw_mbytes_per_sec": 0, 00:09:32.498 "r_mbytes_per_sec": 0, 00:09:32.498 "w_mbytes_per_sec": 0 00:09:32.498 }, 00:09:32.498 "claimed": true, 00:09:32.498 "claim_type": "exclusive_write", 00:09:32.498 "zoned": false, 00:09:32.498 "supported_io_types": { 00:09:32.498 "read": true, 00:09:32.498 "write": true, 00:09:32.498 "unmap": true, 00:09:32.498 "flush": true, 00:09:32.498 "reset": true, 00:09:32.498 "nvme_admin": false, 00:09:32.498 "nvme_io": false, 00:09:32.498 "nvme_io_md": false, 00:09:32.498 "write_zeroes": true, 00:09:32.498 "zcopy": true, 00:09:32.498 "get_zone_info": false, 00:09:32.498 "zone_management": false, 00:09:32.498 "zone_append": false, 00:09:32.498 "compare": false, 00:09:32.498 "compare_and_write": false, 00:09:32.498 "abort": true, 00:09:32.498 "seek_hole": false, 00:09:32.498 "seek_data": false, 00:09:32.498 "copy": true, 00:09:32.498 "nvme_iov_md": 
false 00:09:32.498 }, 00:09:32.498 "memory_domains": [ 00:09:32.498 { 00:09:32.498 "dma_device_id": "system", 00:09:32.498 "dma_device_type": 1 00:09:32.498 }, 00:09:32.498 { 00:09:32.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.498 "dma_device_type": 2 00:09:32.498 } 00:09:32.498 ], 00:09:32.498 "driver_specific": {} 00:09:32.498 } 00:09:32.498 ] 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.498 
09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.498 "name": "Existed_Raid", 00:09:32.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.498 "strip_size_kb": 0, 00:09:32.498 "state": "configuring", 00:09:32.498 "raid_level": "raid1", 00:09:32.498 "superblock": false, 00:09:32.498 "num_base_bdevs": 2, 00:09:32.498 "num_base_bdevs_discovered": 1, 00:09:32.498 "num_base_bdevs_operational": 2, 00:09:32.498 "base_bdevs_list": [ 00:09:32.498 { 00:09:32.498 "name": "BaseBdev1", 00:09:32.498 "uuid": "73ce0675-bc59-446a-8b7e-cedeb2ab7916", 00:09:32.498 "is_configured": true, 00:09:32.498 "data_offset": 0, 00:09:32.498 "data_size": 65536 00:09:32.498 }, 00:09:32.498 { 00:09:32.498 "name": "BaseBdev2", 00:09:32.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.498 "is_configured": false, 00:09:32.498 "data_offset": 0, 00:09:32.498 "data_size": 0 00:09:32.498 } 00:09:32.498 ] 00:09:32.498 }' 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.498 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.097 [2024-11-27 09:46:33.913513] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.097 [2024-11-27 09:46:33.913646] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.097 [2024-11-27 09:46:33.921542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.097 [2024-11-27 09:46:33.923955] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.097 [2024-11-27 09:46:33.924061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.097 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.097 "name": "Existed_Raid", 00:09:33.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.098 "strip_size_kb": 0, 00:09:33.098 "state": "configuring", 00:09:33.098 "raid_level": "raid1", 00:09:33.098 "superblock": false, 00:09:33.098 "num_base_bdevs": 2, 00:09:33.098 "num_base_bdevs_discovered": 1, 00:09:33.098 "num_base_bdevs_operational": 2, 00:09:33.098 "base_bdevs_list": [ 00:09:33.098 { 00:09:33.098 "name": "BaseBdev1", 00:09:33.098 "uuid": "73ce0675-bc59-446a-8b7e-cedeb2ab7916", 00:09:33.098 "is_configured": true, 00:09:33.098 "data_offset": 0, 00:09:33.098 "data_size": 65536 00:09:33.098 }, 00:09:33.098 { 00:09:33.098 "name": "BaseBdev2", 00:09:33.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.098 "is_configured": false, 00:09:33.098 "data_offset": 0, 00:09:33.098 "data_size": 0 00:09:33.098 } 00:09:33.098 ] 
00:09:33.098 }' 00:09:33.098 09:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.098 09:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.370 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:33.370 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.370 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.370 [2024-11-27 09:46:34.431048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:33.371 [2024-11-27 09:46:34.431128] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:33.371 [2024-11-27 09:46:34.431136] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:33.371 [2024-11-27 09:46:34.431444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:33.371 [2024-11-27 09:46:34.431643] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:33.371 [2024-11-27 09:46:34.431658] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:33.371 [2024-11-27 09:46:34.432014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.371 BaseBdev2 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.371 [ 00:09:33.371 { 00:09:33.371 "name": "BaseBdev2", 00:09:33.371 "aliases": [ 00:09:33.371 "d1b18d8c-e65a-4bdb-a62e-d4fab9e12d7e" 00:09:33.371 ], 00:09:33.371 "product_name": "Malloc disk", 00:09:33.371 "block_size": 512, 00:09:33.371 "num_blocks": 65536, 00:09:33.371 "uuid": "d1b18d8c-e65a-4bdb-a62e-d4fab9e12d7e", 00:09:33.371 "assigned_rate_limits": { 00:09:33.371 "rw_ios_per_sec": 0, 00:09:33.371 "rw_mbytes_per_sec": 0, 00:09:33.371 "r_mbytes_per_sec": 0, 00:09:33.371 "w_mbytes_per_sec": 0 00:09:33.371 }, 00:09:33.371 "claimed": true, 00:09:33.371 "claim_type": "exclusive_write", 00:09:33.371 "zoned": false, 00:09:33.371 "supported_io_types": { 00:09:33.371 "read": true, 00:09:33.371 "write": true, 00:09:33.371 "unmap": true, 00:09:33.371 "flush": true, 00:09:33.371 "reset": true, 00:09:33.371 "nvme_admin": false, 00:09:33.371 "nvme_io": false, 00:09:33.371 "nvme_io_md": false, 00:09:33.371 "write_zeroes": 
true, 00:09:33.371 "zcopy": true, 00:09:33.371 "get_zone_info": false, 00:09:33.371 "zone_management": false, 00:09:33.371 "zone_append": false, 00:09:33.371 "compare": false, 00:09:33.371 "compare_and_write": false, 00:09:33.371 "abort": true, 00:09:33.371 "seek_hole": false, 00:09:33.371 "seek_data": false, 00:09:33.371 "copy": true, 00:09:33.371 "nvme_iov_md": false 00:09:33.371 }, 00:09:33.371 "memory_domains": [ 00:09:33.371 { 00:09:33.371 "dma_device_id": "system", 00:09:33.371 "dma_device_type": 1 00:09:33.371 }, 00:09:33.371 { 00:09:33.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.371 "dma_device_type": 2 00:09:33.371 } 00:09:33.371 ], 00:09:33.371 "driver_specific": {} 00:09:33.371 } 00:09:33.371 ] 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.371 09:46:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.371 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.631 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.631 "name": "Existed_Raid", 00:09:33.631 "uuid": "740cb73b-7265-4330-81c8-d9a24f940799", 00:09:33.631 "strip_size_kb": 0, 00:09:33.631 "state": "online", 00:09:33.631 "raid_level": "raid1", 00:09:33.631 "superblock": false, 00:09:33.631 "num_base_bdevs": 2, 00:09:33.631 "num_base_bdevs_discovered": 2, 00:09:33.631 "num_base_bdevs_operational": 2, 00:09:33.631 "base_bdevs_list": [ 00:09:33.631 { 00:09:33.631 "name": "BaseBdev1", 00:09:33.631 "uuid": "73ce0675-bc59-446a-8b7e-cedeb2ab7916", 00:09:33.631 "is_configured": true, 00:09:33.631 "data_offset": 0, 00:09:33.631 "data_size": 65536 00:09:33.631 }, 00:09:33.631 { 00:09:33.631 "name": "BaseBdev2", 00:09:33.631 "uuid": "d1b18d8c-e65a-4bdb-a62e-d4fab9e12d7e", 00:09:33.631 "is_configured": true, 00:09:33.631 "data_offset": 0, 00:09:33.631 "data_size": 65536 00:09:33.631 } 00:09:33.631 ] 00:09:33.631 }' 00:09:33.631 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.631 09:46:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.892 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:33.892 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:33.892 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:33.892 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:33.892 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:33.892 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:33.892 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:33.892 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.892 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.892 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:33.892 [2024-11-27 09:46:34.898593] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.892 09:46:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.892 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:33.892 "name": "Existed_Raid", 00:09:33.892 "aliases": [ 00:09:33.892 "740cb73b-7265-4330-81c8-d9a24f940799" 00:09:33.892 ], 00:09:33.892 "product_name": "Raid Volume", 00:09:33.892 "block_size": 512, 00:09:33.892 "num_blocks": 65536, 00:09:33.892 "uuid": "740cb73b-7265-4330-81c8-d9a24f940799", 00:09:33.892 "assigned_rate_limits": { 00:09:33.892 "rw_ios_per_sec": 0, 00:09:33.892 "rw_mbytes_per_sec": 0, 00:09:33.892 "r_mbytes_per_sec": 0, 00:09:33.892 
"w_mbytes_per_sec": 0 00:09:33.892 }, 00:09:33.892 "claimed": false, 00:09:33.892 "zoned": false, 00:09:33.892 "supported_io_types": { 00:09:33.892 "read": true, 00:09:33.892 "write": true, 00:09:33.892 "unmap": false, 00:09:33.892 "flush": false, 00:09:33.892 "reset": true, 00:09:33.892 "nvme_admin": false, 00:09:33.892 "nvme_io": false, 00:09:33.892 "nvme_io_md": false, 00:09:33.892 "write_zeroes": true, 00:09:33.892 "zcopy": false, 00:09:33.892 "get_zone_info": false, 00:09:33.892 "zone_management": false, 00:09:33.892 "zone_append": false, 00:09:33.892 "compare": false, 00:09:33.892 "compare_and_write": false, 00:09:33.892 "abort": false, 00:09:33.892 "seek_hole": false, 00:09:33.892 "seek_data": false, 00:09:33.892 "copy": false, 00:09:33.892 "nvme_iov_md": false 00:09:33.892 }, 00:09:33.892 "memory_domains": [ 00:09:33.892 { 00:09:33.892 "dma_device_id": "system", 00:09:33.892 "dma_device_type": 1 00:09:33.892 }, 00:09:33.892 { 00:09:33.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.892 "dma_device_type": 2 00:09:33.892 }, 00:09:33.892 { 00:09:33.892 "dma_device_id": "system", 00:09:33.892 "dma_device_type": 1 00:09:33.892 }, 00:09:33.892 { 00:09:33.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.892 "dma_device_type": 2 00:09:33.892 } 00:09:33.892 ], 00:09:33.892 "driver_specific": { 00:09:33.892 "raid": { 00:09:33.892 "uuid": "740cb73b-7265-4330-81c8-d9a24f940799", 00:09:33.892 "strip_size_kb": 0, 00:09:33.892 "state": "online", 00:09:33.892 "raid_level": "raid1", 00:09:33.892 "superblock": false, 00:09:33.892 "num_base_bdevs": 2, 00:09:33.892 "num_base_bdevs_discovered": 2, 00:09:33.892 "num_base_bdevs_operational": 2, 00:09:33.892 "base_bdevs_list": [ 00:09:33.892 { 00:09:33.892 "name": "BaseBdev1", 00:09:33.892 "uuid": "73ce0675-bc59-446a-8b7e-cedeb2ab7916", 00:09:33.892 "is_configured": true, 00:09:33.892 "data_offset": 0, 00:09:33.892 "data_size": 65536 00:09:33.892 }, 00:09:33.892 { 00:09:33.892 "name": "BaseBdev2", 00:09:33.892 "uuid": 
"d1b18d8c-e65a-4bdb-a62e-d4fab9e12d7e", 00:09:33.892 "is_configured": true, 00:09:33.892 "data_offset": 0, 00:09:33.892 "data_size": 65536 00:09:33.892 } 00:09:33.892 ] 00:09:33.892 } 00:09:33.892 } 00:09:33.892 }' 00:09:33.892 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:33.892 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:33.892 BaseBdev2' 00:09:33.892 09:46:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:34.152 09:46:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.152 [2024-11-27 09:46:35.137975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.152 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:34.153 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:34.153 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:34.153 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:34.153 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:34.153 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:34.153 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:34.153 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.153 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.153 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.153 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:34.153 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.153 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.153 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.153 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.153 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.153 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.153 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.153 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.153 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.422 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.422 "name": "Existed_Raid", 00:09:34.422 "uuid": "740cb73b-7265-4330-81c8-d9a24f940799", 00:09:34.422 "strip_size_kb": 0, 00:09:34.422 "state": "online", 00:09:34.422 "raid_level": "raid1", 00:09:34.422 "superblock": false, 00:09:34.422 "num_base_bdevs": 2, 00:09:34.422 "num_base_bdevs_discovered": 1, 00:09:34.422 "num_base_bdevs_operational": 1, 00:09:34.422 "base_bdevs_list": [ 00:09:34.422 { 
00:09:34.422 "name": null, 00:09:34.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.422 "is_configured": false, 00:09:34.422 "data_offset": 0, 00:09:34.422 "data_size": 65536 00:09:34.422 }, 00:09:34.422 { 00:09:34.422 "name": "BaseBdev2", 00:09:34.422 "uuid": "d1b18d8c-e65a-4bdb-a62e-d4fab9e12d7e", 00:09:34.422 "is_configured": true, 00:09:34.422 "data_offset": 0, 00:09:34.422 "data_size": 65536 00:09:34.422 } 00:09:34.422 ] 00:09:34.422 }' 00:09:34.422 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.422 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.682 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:34.682 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:34.682 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:34.682 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.682 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.682 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.682 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.682 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:34.682 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:34.682 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:34.682 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.682 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:34.682 [2024-11-27 09:46:35.668232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:34.682 [2024-11-27 09:46:35.668445] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.682 [2024-11-27 09:46:35.774151] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.682 [2024-11-27 09:46:35.774344] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.682 [2024-11-27 09:46:35.774392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:34.682 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.682 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:34.682 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:34.682 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.682 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.682 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.682 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:34.682 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.943 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:34.943 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:34.943 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:34.943 09:46:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62954 00:09:34.943 09:46:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62954 ']' 00:09:34.943 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62954 00:09:34.943 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:34.943 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.943 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62954 00:09:34.943 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:34.943 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:34.943 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62954' 00:09:34.943 killing process with pid 62954 00:09:34.943 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62954 00:09:34.943 [2024-11-27 09:46:35.866877] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:34.943 09:46:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62954 00:09:34.943 [2024-11-27 09:46:35.885838] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:36.325 00:09:36.325 real 0m5.183s 00:09:36.325 user 0m7.276s 00:09:36.325 sys 0m0.905s 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.325 ************************************ 00:09:36.325 END TEST raid_state_function_test 00:09:36.325 ************************************ 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.325 09:46:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:09:36.325 09:46:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:36.325 09:46:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.325 09:46:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:36.325 ************************************ 00:09:36.325 START TEST raid_state_function_test_sb 00:09:36.325 ************************************ 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63207 00:09:36.325 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:36.326 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63207' 00:09:36.326 Process raid pid: 63207 00:09:36.326 09:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63207 00:09:36.326 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63207 ']' 00:09:36.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:36.326 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.326 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.326 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.326 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.326 09:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.326 [2024-11-27 09:46:37.326400] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:09:36.326 [2024-11-27 09:46:37.326545] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.586 [2024-11-27 09:46:37.504295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.586 [2024-11-27 09:46:37.650052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.845 [2024-11-27 09:46:37.892120] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.845 [2024-11-27 09:46:37.892188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.106 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.106 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:37.106 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:37.106 09:46:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.106 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.106 [2024-11-27 09:46:38.197266] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:37.106 [2024-11-27 09:46:38.197427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:37.106 [2024-11-27 09:46:38.197446] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.106 [2024-11-27 09:46:38.197460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:37.106 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.106 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:37.106 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.106 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.106 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.106 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.106 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:37.106 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.106 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.106 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.106 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.106 09:46:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.106 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.107 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.107 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.107 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.367 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.367 "name": "Existed_Raid", 00:09:37.367 "uuid": "dba9bb35-6d11-4d95-b9f2-c3cc0fcb14d2", 00:09:37.367 "strip_size_kb": 0, 00:09:37.367 "state": "configuring", 00:09:37.367 "raid_level": "raid1", 00:09:37.367 "superblock": true, 00:09:37.367 "num_base_bdevs": 2, 00:09:37.367 "num_base_bdevs_discovered": 0, 00:09:37.367 "num_base_bdevs_operational": 2, 00:09:37.367 "base_bdevs_list": [ 00:09:37.367 { 00:09:37.367 "name": "BaseBdev1", 00:09:37.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.367 "is_configured": false, 00:09:37.367 "data_offset": 0, 00:09:37.367 "data_size": 0 00:09:37.367 }, 00:09:37.367 { 00:09:37.367 "name": "BaseBdev2", 00:09:37.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.367 "is_configured": false, 00:09:37.367 "data_offset": 0, 00:09:37.367 "data_size": 0 00:09:37.367 } 00:09:37.367 ] 00:09:37.367 }' 00:09:37.367 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.367 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.627 [2024-11-27 09:46:38.644389] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.627 [2024-11-27 09:46:38.644501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.627 [2024-11-27 09:46:38.656358] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:37.627 [2024-11-27 09:46:38.656459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:37.627 [2024-11-27 09:46:38.656490] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:37.627 [2024-11-27 09:46:38.656519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.627 [2024-11-27 09:46:38.711402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:09:37.627 BaseBdev1 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.627 [ 00:09:37.627 { 00:09:37.627 "name": "BaseBdev1", 00:09:37.627 "aliases": [ 00:09:37.627 "40ec8cb6-b26d-405c-a33b-c5c7860f5501" 00:09:37.627 ], 00:09:37.627 "product_name": "Malloc disk", 00:09:37.627 "block_size": 512, 00:09:37.627 "num_blocks": 65536, 00:09:37.627 "uuid": "40ec8cb6-b26d-405c-a33b-c5c7860f5501", 00:09:37.627 
"assigned_rate_limits": { 00:09:37.627 "rw_ios_per_sec": 0, 00:09:37.627 "rw_mbytes_per_sec": 0, 00:09:37.627 "r_mbytes_per_sec": 0, 00:09:37.627 "w_mbytes_per_sec": 0 00:09:37.627 }, 00:09:37.627 "claimed": true, 00:09:37.627 "claim_type": "exclusive_write", 00:09:37.627 "zoned": false, 00:09:37.627 "supported_io_types": { 00:09:37.627 "read": true, 00:09:37.627 "write": true, 00:09:37.627 "unmap": true, 00:09:37.627 "flush": true, 00:09:37.627 "reset": true, 00:09:37.627 "nvme_admin": false, 00:09:37.627 "nvme_io": false, 00:09:37.627 "nvme_io_md": false, 00:09:37.627 "write_zeroes": true, 00:09:37.627 "zcopy": true, 00:09:37.627 "get_zone_info": false, 00:09:37.627 "zone_management": false, 00:09:37.627 "zone_append": false, 00:09:37.627 "compare": false, 00:09:37.627 "compare_and_write": false, 00:09:37.627 "abort": true, 00:09:37.627 "seek_hole": false, 00:09:37.627 "seek_data": false, 00:09:37.627 "copy": true, 00:09:37.627 "nvme_iov_md": false 00:09:37.627 }, 00:09:37.627 "memory_domains": [ 00:09:37.627 { 00:09:37.627 "dma_device_id": "system", 00:09:37.627 "dma_device_type": 1 00:09:37.627 }, 00:09:37.627 { 00:09:37.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.627 "dma_device_type": 2 00:09:37.627 } 00:09:37.627 ], 00:09:37.627 "driver_specific": {} 00:09:37.627 } 00:09:37.627 ] 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.627 09:46:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:37.628 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:37.628 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:37.628 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.628 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.628 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.628 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.888 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.888 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.888 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.888 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.888 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.888 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.888 "name": "Existed_Raid", 00:09:37.888 "uuid": "8904c044-5ab7-4735-8d9c-0e73ac1e0360", 00:09:37.888 "strip_size_kb": 0, 00:09:37.888 "state": "configuring", 00:09:37.888 "raid_level": "raid1", 00:09:37.888 "superblock": true, 00:09:37.888 "num_base_bdevs": 2, 00:09:37.888 "num_base_bdevs_discovered": 1, 00:09:37.888 "num_base_bdevs_operational": 2, 00:09:37.888 "base_bdevs_list": [ 00:09:37.888 { 00:09:37.888 "name": "BaseBdev1", 00:09:37.888 "uuid": "40ec8cb6-b26d-405c-a33b-c5c7860f5501", 00:09:37.888 "is_configured": true, 00:09:37.888 "data_offset": 2048, 
00:09:37.888 "data_size": 63488 00:09:37.888 }, 00:09:37.888 { 00:09:37.888 "name": "BaseBdev2", 00:09:37.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.888 "is_configured": false, 00:09:37.888 "data_offset": 0, 00:09:37.888 "data_size": 0 00:09:37.888 } 00:09:37.888 ] 00:09:37.888 }' 00:09:37.888 09:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.888 09:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.148 [2024-11-27 09:46:39.182729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:38.148 [2024-11-27 09:46:39.182806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.148 [2024-11-27 09:46:39.194764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.148 [2024-11-27 09:46:39.197149] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.148 [2024-11-27 09:46:39.197199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.148 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.148 "name": "Existed_Raid", 00:09:38.149 "uuid": "ae2b2b95-b610-458c-a38f-4f06820a6f53", 00:09:38.149 "strip_size_kb": 0, 00:09:38.149 "state": "configuring", 00:09:38.149 "raid_level": "raid1", 00:09:38.149 "superblock": true, 00:09:38.149 "num_base_bdevs": 2, 00:09:38.149 "num_base_bdevs_discovered": 1, 00:09:38.149 "num_base_bdevs_operational": 2, 00:09:38.149 "base_bdevs_list": [ 00:09:38.149 { 00:09:38.149 "name": "BaseBdev1", 00:09:38.149 "uuid": "40ec8cb6-b26d-405c-a33b-c5c7860f5501", 00:09:38.149 "is_configured": true, 00:09:38.149 "data_offset": 2048, 00:09:38.149 "data_size": 63488 00:09:38.149 }, 00:09:38.149 { 00:09:38.149 "name": "BaseBdev2", 00:09:38.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.149 "is_configured": false, 00:09:38.149 "data_offset": 0, 00:09:38.149 "data_size": 0 00:09:38.149 } 00:09:38.149 ] 00:09:38.149 }' 00:09:38.149 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.149 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.718 [2024-11-27 09:46:39.675426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.718 [2024-11-27 09:46:39.675888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:38.718 [2024-11-27 09:46:39.675957] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:38.718 [2024-11-27 09:46:39.676365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:38.718 BaseBdev2 00:09:38.718 [2024-11-27 09:46:39.676627] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:38.718 [2024-11-27 09:46:39.676645] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:38.718 [2024-11-27 09:46:39.676835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.718 [ 00:09:38.718 { 00:09:38.718 "name": "BaseBdev2", 00:09:38.718 "aliases": [ 00:09:38.718 "6ef0e5f7-772f-4aea-9e83-a34555d1b0d5" 00:09:38.718 ], 00:09:38.718 "product_name": "Malloc disk", 00:09:38.718 "block_size": 512, 00:09:38.718 "num_blocks": 65536, 00:09:38.718 "uuid": "6ef0e5f7-772f-4aea-9e83-a34555d1b0d5", 00:09:38.718 "assigned_rate_limits": { 00:09:38.718 "rw_ios_per_sec": 0, 00:09:38.718 "rw_mbytes_per_sec": 0, 00:09:38.718 "r_mbytes_per_sec": 0, 00:09:38.718 "w_mbytes_per_sec": 0 00:09:38.718 }, 00:09:38.718 "claimed": true, 00:09:38.718 "claim_type": "exclusive_write", 00:09:38.718 "zoned": false, 00:09:38.718 "supported_io_types": { 00:09:38.718 "read": true, 00:09:38.718 "write": true, 00:09:38.718 "unmap": true, 00:09:38.718 "flush": true, 00:09:38.718 "reset": true, 00:09:38.718 "nvme_admin": false, 00:09:38.718 "nvme_io": false, 00:09:38.718 "nvme_io_md": false, 00:09:38.718 "write_zeroes": true, 00:09:38.718 "zcopy": true, 00:09:38.718 "get_zone_info": false, 00:09:38.718 "zone_management": false, 00:09:38.718 "zone_append": false, 00:09:38.718 "compare": false, 00:09:38.718 "compare_and_write": false, 00:09:38.718 "abort": true, 00:09:38.718 "seek_hole": false, 00:09:38.718 "seek_data": false, 00:09:38.718 "copy": true, 00:09:38.718 "nvme_iov_md": false 00:09:38.718 }, 00:09:38.718 "memory_domains": [ 00:09:38.718 { 00:09:38.718 "dma_device_id": "system", 00:09:38.718 "dma_device_type": 1 00:09:38.718 }, 00:09:38.718 { 00:09:38.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.718 "dma_device_type": 2 00:09:38.718 } 00:09:38.718 ], 00:09:38.718 "driver_specific": {} 00:09:38.718 } 00:09:38.718 ] 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.718 "name": "Existed_Raid", 00:09:38.718 "uuid": "ae2b2b95-b610-458c-a38f-4f06820a6f53", 00:09:38.718 "strip_size_kb": 0, 00:09:38.718 "state": "online", 00:09:38.718 "raid_level": "raid1", 00:09:38.718 "superblock": true, 00:09:38.718 "num_base_bdevs": 2, 00:09:38.718 "num_base_bdevs_discovered": 2, 00:09:38.718 "num_base_bdevs_operational": 2, 00:09:38.718 "base_bdevs_list": [ 00:09:38.718 { 00:09:38.718 "name": "BaseBdev1", 00:09:38.718 "uuid": "40ec8cb6-b26d-405c-a33b-c5c7860f5501", 00:09:38.718 "is_configured": true, 00:09:38.718 "data_offset": 2048, 00:09:38.718 "data_size": 63488 00:09:38.718 }, 00:09:38.718 { 00:09:38.718 "name": "BaseBdev2", 00:09:38.718 "uuid": "6ef0e5f7-772f-4aea-9e83-a34555d1b0d5", 00:09:38.718 "is_configured": true, 00:09:38.718 "data_offset": 2048, 00:09:38.718 "data_size": 63488 00:09:38.718 } 00:09:38.718 ] 00:09:38.718 }' 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.718 09:46:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.978 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:38.978 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:38.978 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:38.978 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:38.979 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:38.979 09:46:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:38.979 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:38.979 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.979 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.979 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:38.979 [2024-11-27 09:46:40.087104] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.979 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:39.239 "name": "Existed_Raid", 00:09:39.239 "aliases": [ 00:09:39.239 "ae2b2b95-b610-458c-a38f-4f06820a6f53" 00:09:39.239 ], 00:09:39.239 "product_name": "Raid Volume", 00:09:39.239 "block_size": 512, 00:09:39.239 "num_blocks": 63488, 00:09:39.239 "uuid": "ae2b2b95-b610-458c-a38f-4f06820a6f53", 00:09:39.239 "assigned_rate_limits": { 00:09:39.239 "rw_ios_per_sec": 0, 00:09:39.239 "rw_mbytes_per_sec": 0, 00:09:39.239 "r_mbytes_per_sec": 0, 00:09:39.239 "w_mbytes_per_sec": 0 00:09:39.239 }, 00:09:39.239 "claimed": false, 00:09:39.239 "zoned": false, 00:09:39.239 "supported_io_types": { 00:09:39.239 "read": true, 00:09:39.239 "write": true, 00:09:39.239 "unmap": false, 00:09:39.239 "flush": false, 00:09:39.239 "reset": true, 00:09:39.239 "nvme_admin": false, 00:09:39.239 "nvme_io": false, 00:09:39.239 "nvme_io_md": false, 00:09:39.239 "write_zeroes": true, 00:09:39.239 "zcopy": false, 00:09:39.239 "get_zone_info": false, 00:09:39.239 "zone_management": false, 00:09:39.239 "zone_append": false, 00:09:39.239 "compare": false, 00:09:39.239 "compare_and_write": false, 00:09:39.239 "abort": false, 00:09:39.239 "seek_hole": false, 
00:09:39.239 "seek_data": false, 00:09:39.239 "copy": false, 00:09:39.239 "nvme_iov_md": false 00:09:39.239 }, 00:09:39.239 "memory_domains": [ 00:09:39.239 { 00:09:39.239 "dma_device_id": "system", 00:09:39.239 "dma_device_type": 1 00:09:39.239 }, 00:09:39.239 { 00:09:39.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.239 "dma_device_type": 2 00:09:39.239 }, 00:09:39.239 { 00:09:39.239 "dma_device_id": "system", 00:09:39.239 "dma_device_type": 1 00:09:39.239 }, 00:09:39.239 { 00:09:39.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.239 "dma_device_type": 2 00:09:39.239 } 00:09:39.239 ], 00:09:39.239 "driver_specific": { 00:09:39.239 "raid": { 00:09:39.239 "uuid": "ae2b2b95-b610-458c-a38f-4f06820a6f53", 00:09:39.239 "strip_size_kb": 0, 00:09:39.239 "state": "online", 00:09:39.239 "raid_level": "raid1", 00:09:39.239 "superblock": true, 00:09:39.239 "num_base_bdevs": 2, 00:09:39.239 "num_base_bdevs_discovered": 2, 00:09:39.239 "num_base_bdevs_operational": 2, 00:09:39.239 "base_bdevs_list": [ 00:09:39.239 { 00:09:39.239 "name": "BaseBdev1", 00:09:39.239 "uuid": "40ec8cb6-b26d-405c-a33b-c5c7860f5501", 00:09:39.239 "is_configured": true, 00:09:39.239 "data_offset": 2048, 00:09:39.239 "data_size": 63488 00:09:39.239 }, 00:09:39.239 { 00:09:39.239 "name": "BaseBdev2", 00:09:39.239 "uuid": "6ef0e5f7-772f-4aea-9e83-a34555d1b0d5", 00:09:39.239 "is_configured": true, 00:09:39.239 "data_offset": 2048, 00:09:39.239 "data_size": 63488 00:09:39.239 } 00:09:39.239 ] 00:09:39.239 } 00:09:39.239 } 00:09:39.239 }' 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:39.239 BaseBdev2' 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.239 09:46:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.239 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.239 [2024-11-27 09:46:40.302481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.500 "name": "Existed_Raid", 00:09:39.500 "uuid": "ae2b2b95-b610-458c-a38f-4f06820a6f53", 00:09:39.500 "strip_size_kb": 0, 00:09:39.500 "state": "online", 00:09:39.500 "raid_level": "raid1", 00:09:39.500 "superblock": true, 00:09:39.500 "num_base_bdevs": 2, 00:09:39.500 "num_base_bdevs_discovered": 1, 00:09:39.500 "num_base_bdevs_operational": 1, 00:09:39.500 "base_bdevs_list": [ 00:09:39.500 { 00:09:39.500 "name": null, 00:09:39.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.500 "is_configured": false, 00:09:39.500 "data_offset": 0, 00:09:39.500 "data_size": 63488 00:09:39.500 }, 00:09:39.500 { 00:09:39.500 "name": "BaseBdev2", 00:09:39.500 "uuid": "6ef0e5f7-772f-4aea-9e83-a34555d1b0d5", 00:09:39.500 "is_configured": true, 00:09:39.500 "data_offset": 2048, 00:09:39.500 "data_size": 63488 00:09:39.500 } 00:09:39.500 ] 00:09:39.500 }' 00:09:39.500 09:46:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.500 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.760 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:39.760 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:39.760 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:39.760 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.760 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.760 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.020 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.020 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:40.020 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:40.020 09:46:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:40.020 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.020 09:46:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.020 [2024-11-27 09:46:40.928322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:40.020 [2024-11-27 09:46:40.928541] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.020 [2024-11-27 09:46:41.036731] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.020 [2024-11-27 09:46:41.036938] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.020 [2024-11-27 09:46:41.036961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63207 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63207 ']' 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63207 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63207 00:09:40.020 killing process with pid 63207 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63207' 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63207 00:09:40.020 [2024-11-27 09:46:41.133594] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.020 09:46:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63207 00:09:40.280 [2024-11-27 09:46:41.152892] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:41.664 09:46:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:41.664 00:09:41.664 real 0m5.194s 00:09:41.664 user 0m7.250s 00:09:41.664 sys 0m0.953s 00:09:41.664 09:46:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.664 09:46:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.664 ************************************ 00:09:41.664 END TEST raid_state_function_test_sb 00:09:41.664 ************************************ 00:09:41.664 09:46:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:09:41.664 09:46:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:41.664 09:46:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.664 09:46:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:41.664 ************************************ 00:09:41.664 START TEST 
raid_superblock_test 00:09:41.664 ************************************ 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63459 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63459 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63459 ']' 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.664 09:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.664 [2024-11-27 09:46:42.578931] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:09:41.664 [2024-11-27 09:46:42.579201] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63459 ] 00:09:41.664 [2024-11-27 09:46:42.763398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.927 [2024-11-27 09:46:42.905581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.205 [2024-11-27 09:46:43.143920] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.205 [2024-11-27 09:46:43.144016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:42.466 
09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.466 malloc1 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.466 [2024-11-27 09:46:43.506195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:42.466 [2024-11-27 09:46:43.506290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.466 [2024-11-27 09:46:43.506319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:42.466 [2024-11-27 09:46:43.506330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.466 [2024-11-27 09:46:43.509108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.466 [2024-11-27 09:46:43.509213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:42.466 pt1 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.466 malloc2 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.466 [2024-11-27 09:46:43.568931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:42.466 [2024-11-27 09:46:43.569088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.466 [2024-11-27 09:46:43.569129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:42.466 [2024-11-27 09:46:43.569140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.466 [2024-11-27 09:46:43.571823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.466 [2024-11-27 09:46:43.571868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:42.466 
pt2 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.466 [2024-11-27 09:46:43.580983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:42.466 [2024-11-27 09:46:43.583286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:42.466 [2024-11-27 09:46:43.583493] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:42.466 [2024-11-27 09:46:43.583513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:42.466 [2024-11-27 09:46:43.583853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:42.466 [2024-11-27 09:46:43.584070] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:42.466 [2024-11-27 09:46:43.584090] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:42.466 [2024-11-27 09:46:43.584287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:42.466 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.467 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.467 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.467 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.467 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.467 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.467 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.467 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.727 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.727 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.727 "name": "raid_bdev1", 00:09:42.727 "uuid": "de1e896f-4682-4252-b58f-65b1fa50d8e3", 00:09:42.727 "strip_size_kb": 0, 00:09:42.727 "state": "online", 00:09:42.727 "raid_level": "raid1", 00:09:42.727 "superblock": true, 00:09:42.727 "num_base_bdevs": 2, 00:09:42.727 "num_base_bdevs_discovered": 2, 00:09:42.727 "num_base_bdevs_operational": 2, 00:09:42.727 "base_bdevs_list": [ 00:09:42.727 { 00:09:42.727 "name": "pt1", 00:09:42.727 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:09:42.727 "is_configured": true, 00:09:42.727 "data_offset": 2048, 00:09:42.727 "data_size": 63488 00:09:42.727 }, 00:09:42.727 { 00:09:42.727 "name": "pt2", 00:09:42.727 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.727 "is_configured": true, 00:09:42.727 "data_offset": 2048, 00:09:42.727 "data_size": 63488 00:09:42.727 } 00:09:42.727 ] 00:09:42.727 }' 00:09:42.727 09:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.727 09:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.987 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:42.987 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:42.987 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:42.987 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:42.987 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:42.987 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:42.987 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:42.987 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:42.987 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.987 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.987 [2024-11-27 09:46:44.068490] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.987 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.987 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:09:42.987 "name": "raid_bdev1", 00:09:42.987 "aliases": [ 00:09:42.987 "de1e896f-4682-4252-b58f-65b1fa50d8e3" 00:09:42.987 ], 00:09:42.987 "product_name": "Raid Volume", 00:09:42.987 "block_size": 512, 00:09:42.987 "num_blocks": 63488, 00:09:42.987 "uuid": "de1e896f-4682-4252-b58f-65b1fa50d8e3", 00:09:42.987 "assigned_rate_limits": { 00:09:42.987 "rw_ios_per_sec": 0, 00:09:42.987 "rw_mbytes_per_sec": 0, 00:09:42.987 "r_mbytes_per_sec": 0, 00:09:42.987 "w_mbytes_per_sec": 0 00:09:42.987 }, 00:09:42.987 "claimed": false, 00:09:42.987 "zoned": false, 00:09:42.987 "supported_io_types": { 00:09:42.987 "read": true, 00:09:42.987 "write": true, 00:09:42.987 "unmap": false, 00:09:42.987 "flush": false, 00:09:42.987 "reset": true, 00:09:42.987 "nvme_admin": false, 00:09:42.987 "nvme_io": false, 00:09:42.987 "nvme_io_md": false, 00:09:42.987 "write_zeroes": true, 00:09:42.987 "zcopy": false, 00:09:42.987 "get_zone_info": false, 00:09:42.987 "zone_management": false, 00:09:42.987 "zone_append": false, 00:09:42.987 "compare": false, 00:09:42.987 "compare_and_write": false, 00:09:42.987 "abort": false, 00:09:42.987 "seek_hole": false, 00:09:42.987 "seek_data": false, 00:09:42.987 "copy": false, 00:09:42.987 "nvme_iov_md": false 00:09:42.987 }, 00:09:42.987 "memory_domains": [ 00:09:42.987 { 00:09:42.987 "dma_device_id": "system", 00:09:42.987 "dma_device_type": 1 00:09:42.987 }, 00:09:42.987 { 00:09:42.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.987 "dma_device_type": 2 00:09:42.987 }, 00:09:42.987 { 00:09:42.987 "dma_device_id": "system", 00:09:42.987 "dma_device_type": 1 00:09:42.987 }, 00:09:42.987 { 00:09:42.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.987 "dma_device_type": 2 00:09:42.987 } 00:09:42.987 ], 00:09:42.987 "driver_specific": { 00:09:42.987 "raid": { 00:09:42.987 "uuid": "de1e896f-4682-4252-b58f-65b1fa50d8e3", 00:09:42.987 "strip_size_kb": 0, 00:09:42.987 "state": "online", 00:09:42.987 "raid_level": "raid1", 
00:09:42.987 "superblock": true, 00:09:42.987 "num_base_bdevs": 2, 00:09:42.987 "num_base_bdevs_discovered": 2, 00:09:42.987 "num_base_bdevs_operational": 2, 00:09:42.987 "base_bdevs_list": [ 00:09:42.987 { 00:09:42.987 "name": "pt1", 00:09:42.987 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:42.987 "is_configured": true, 00:09:42.987 "data_offset": 2048, 00:09:42.987 "data_size": 63488 00:09:42.987 }, 00:09:42.987 { 00:09:42.987 "name": "pt2", 00:09:42.987 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.987 "is_configured": true, 00:09:42.987 "data_offset": 2048, 00:09:42.987 "data_size": 63488 00:09:42.987 } 00:09:42.987 ] 00:09:42.987 } 00:09:42.987 } 00:09:42.987 }' 00:09:42.987 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:43.247 pt2' 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:43.247 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:43.248 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.248 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.248 [2024-11-27 09:46:44.316144] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:43.248 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.248 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=de1e896f-4682-4252-b58f-65b1fa50d8e3 00:09:43.248 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z de1e896f-4682-4252-b58f-65b1fa50d8e3 ']' 00:09:43.248 09:46:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:43.248 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.248 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.248 [2024-11-27 09:46:44.363668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:43.248 [2024-11-27 09:46:44.363708] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:43.248 [2024-11-27 09:46:44.363833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.248 [2024-11-27 09:46:44.363910] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:43.248 [2024-11-27 09:46:44.363926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:43.248 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.248 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:43.248 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.248 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.248 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:43.508 09:46:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.508 [2024-11-27 09:46:44.519458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:43.508 [2024-11-27 09:46:44.522048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:43.508 [2024-11-27 09:46:44.522184] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:43.508 [2024-11-27 09:46:44.522320] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:43.508 [2024-11-27 09:46:44.522396] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:43.508 [2024-11-27 09:46:44.522433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:43.508 request: 00:09:43.508 { 00:09:43.508 "name": "raid_bdev1", 00:09:43.508 "raid_level": "raid1", 00:09:43.508 "base_bdevs": [ 00:09:43.508 "malloc1", 00:09:43.508 "malloc2" 00:09:43.508 ], 00:09:43.508 "superblock": false, 00:09:43.508 "method": "bdev_raid_create", 00:09:43.508 "req_id": 1 00:09:43.508 } 00:09:43.508 Got 
JSON-RPC error response 00:09:43.508 response: 00:09:43.508 { 00:09:43.508 "code": -17, 00:09:43.508 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:43.508 } 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.508 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.509 [2024-11-27 09:46:44.563341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:43.509 [2024-11-27 09:46:44.563488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:09:43.509 [2024-11-27 09:46:44.563531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:43.509 [2024-11-27 09:46:44.563592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.509 [2024-11-27 09:46:44.566500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.509 [2024-11-27 09:46:44.566591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:43.509 [2024-11-27 09:46:44.566750] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:43.509 [2024-11-27 09:46:44.566853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:43.509 pt1 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.509 
09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.509 "name": "raid_bdev1", 00:09:43.509 "uuid": "de1e896f-4682-4252-b58f-65b1fa50d8e3", 00:09:43.509 "strip_size_kb": 0, 00:09:43.509 "state": "configuring", 00:09:43.509 "raid_level": "raid1", 00:09:43.509 "superblock": true, 00:09:43.509 "num_base_bdevs": 2, 00:09:43.509 "num_base_bdevs_discovered": 1, 00:09:43.509 "num_base_bdevs_operational": 2, 00:09:43.509 "base_bdevs_list": [ 00:09:43.509 { 00:09:43.509 "name": "pt1", 00:09:43.509 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:43.509 "is_configured": true, 00:09:43.509 "data_offset": 2048, 00:09:43.509 "data_size": 63488 00:09:43.509 }, 00:09:43.509 { 00:09:43.509 "name": null, 00:09:43.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.509 "is_configured": false, 00:09:43.509 "data_offset": 2048, 00:09:43.509 "data_size": 63488 00:09:43.509 } 00:09:43.509 ] 00:09:43.509 }' 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.509 09:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.079 [2024-11-27 09:46:45.018577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:44.079 [2024-11-27 09:46:45.018761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.079 [2024-11-27 09:46:45.018808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:44.079 [2024-11-27 09:46:45.018864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.079 [2024-11-27 09:46:45.019506] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.079 [2024-11-27 09:46:45.019584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:44.079 [2024-11-27 09:46:45.019726] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:44.079 [2024-11-27 09:46:45.019793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:44.079 [2024-11-27 09:46:45.019974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:44.079 [2024-11-27 09:46:45.020051] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:44.079 [2024-11-27 09:46:45.020390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:44.079 [2024-11-27 09:46:45.020610] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:44.079 [2024-11-27 09:46:45.020654] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:09:44.079 [2024-11-27 09:46:45.020879] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.079 pt2 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
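The `NOT rpc_cmd bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1` step near the top of this excerpt is a negative test: the RPC is expected to fail with -17 ("File exists") because the malloc bdevs already carry a foreign superblock, and the harness inverts that failure into a pass. A minimal sketch of that inversion pattern, with `false` standing in for the failing rpc_cmd (the real `NOT` in autotest_common.sh also records an `es` status and handles more cases):

```shell
# Hedged sketch of the harness's NOT pattern: run a command that is
# expected to fail, and succeed only if it actually failed. `false`
# stands in for the failing bdev_raid_create RPC seen in the log.
NOT() {
  if "$@"; then
    return 1   # command unexpectedly succeeded
  else
    return 0   # expected failure observed
  fi
}

NOT false && echo "create correctly rejected"
```

This is why the log shows `[[ 1 == 0 ]]` (the raw RPC status) followed by `es=1` and `(( !es == 0 ))` evaluating as success.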
00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.079 "name": "raid_bdev1", 00:09:44.079 "uuid": "de1e896f-4682-4252-b58f-65b1fa50d8e3", 00:09:44.079 "strip_size_kb": 0, 00:09:44.079 "state": "online", 00:09:44.079 "raid_level": "raid1", 00:09:44.079 "superblock": true, 00:09:44.079 "num_base_bdevs": 2, 00:09:44.079 "num_base_bdevs_discovered": 2, 00:09:44.079 "num_base_bdevs_operational": 2, 00:09:44.079 "base_bdevs_list": [ 00:09:44.079 { 00:09:44.079 "name": "pt1", 00:09:44.079 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.079 "is_configured": true, 00:09:44.079 "data_offset": 2048, 00:09:44.079 "data_size": 63488 00:09:44.079 }, 00:09:44.079 { 00:09:44.079 "name": "pt2", 00:09:44.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.079 "is_configured": true, 00:09:44.079 "data_offset": 2048, 00:09:44.079 "data_size": 63488 00:09:44.079 } 00:09:44.079 ] 00:09:44.079 }' 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.079 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:44.652 [2024-11-27 09:46:45.525977] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:44.652 "name": "raid_bdev1", 00:09:44.652 "aliases": [ 00:09:44.652 "de1e896f-4682-4252-b58f-65b1fa50d8e3" 00:09:44.652 ], 00:09:44.652 "product_name": "Raid Volume", 00:09:44.652 "block_size": 512, 00:09:44.652 "num_blocks": 63488, 00:09:44.652 "uuid": "de1e896f-4682-4252-b58f-65b1fa50d8e3", 00:09:44.652 "assigned_rate_limits": { 00:09:44.652 "rw_ios_per_sec": 0, 00:09:44.652 "rw_mbytes_per_sec": 0, 00:09:44.652 "r_mbytes_per_sec": 0, 00:09:44.652 "w_mbytes_per_sec": 0 00:09:44.652 }, 00:09:44.652 "claimed": false, 00:09:44.652 "zoned": false, 00:09:44.652 "supported_io_types": { 00:09:44.652 "read": true, 00:09:44.652 "write": true, 00:09:44.652 "unmap": false, 00:09:44.652 "flush": false, 00:09:44.652 "reset": true, 00:09:44.652 "nvme_admin": false, 00:09:44.652 "nvme_io": false, 00:09:44.652 "nvme_io_md": false, 00:09:44.652 "write_zeroes": true, 00:09:44.652 "zcopy": false, 00:09:44.652 "get_zone_info": false, 00:09:44.652 "zone_management": false, 00:09:44.652 "zone_append": false, 00:09:44.652 "compare": false, 00:09:44.652 "compare_and_write": false, 00:09:44.652 "abort": false, 00:09:44.652 "seek_hole": false, 00:09:44.652 "seek_data": false, 00:09:44.652 "copy": false, 00:09:44.652 "nvme_iov_md": false 00:09:44.652 }, 00:09:44.652 "memory_domains": [ 00:09:44.652 { 00:09:44.652 "dma_device_id": 
"system", 00:09:44.652 "dma_device_type": 1 00:09:44.652 }, 00:09:44.652 { 00:09:44.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.652 "dma_device_type": 2 00:09:44.652 }, 00:09:44.652 { 00:09:44.652 "dma_device_id": "system", 00:09:44.652 "dma_device_type": 1 00:09:44.652 }, 00:09:44.652 { 00:09:44.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.652 "dma_device_type": 2 00:09:44.652 } 00:09:44.652 ], 00:09:44.652 "driver_specific": { 00:09:44.652 "raid": { 00:09:44.652 "uuid": "de1e896f-4682-4252-b58f-65b1fa50d8e3", 00:09:44.652 "strip_size_kb": 0, 00:09:44.652 "state": "online", 00:09:44.652 "raid_level": "raid1", 00:09:44.652 "superblock": true, 00:09:44.652 "num_base_bdevs": 2, 00:09:44.652 "num_base_bdevs_discovered": 2, 00:09:44.652 "num_base_bdevs_operational": 2, 00:09:44.652 "base_bdevs_list": [ 00:09:44.652 { 00:09:44.652 "name": "pt1", 00:09:44.652 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.652 "is_configured": true, 00:09:44.652 "data_offset": 2048, 00:09:44.652 "data_size": 63488 00:09:44.652 }, 00:09:44.652 { 00:09:44.652 "name": "pt2", 00:09:44.652 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.652 "is_configured": true, 00:09:44.652 "data_offset": 2048, 00:09:44.652 "data_size": 63488 00:09:44.652 } 00:09:44.652 ] 00:09:44.652 } 00:09:44.652 } 00:09:44.652 }' 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:44.652 pt2' 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.652 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.652 [2024-11-27 09:46:45.777608] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' de1e896f-4682-4252-b58f-65b1fa50d8e3 '!=' de1e896f-4682-4252-b58f-65b1fa50d8e3 ']' 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.913 [2024-11-27 09:46:45.825311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.913 "name": "raid_bdev1", 00:09:44.913 "uuid": "de1e896f-4682-4252-b58f-65b1fa50d8e3", 00:09:44.913 "strip_size_kb": 0, 00:09:44.913 "state": "online", 00:09:44.913 "raid_level": "raid1", 00:09:44.913 "superblock": true, 00:09:44.913 "num_base_bdevs": 2, 00:09:44.913 "num_base_bdevs_discovered": 1, 00:09:44.913 "num_base_bdevs_operational": 1, 00:09:44.913 "base_bdevs_list": [ 00:09:44.913 { 00:09:44.913 "name": null, 00:09:44.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.913 "is_configured": false, 00:09:44.913 "data_offset": 0, 00:09:44.913 "data_size": 63488 00:09:44.913 }, 00:09:44.913 { 00:09:44.913 "name": "pt2", 00:09:44.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.913 "is_configured": true, 00:09:44.913 "data_offset": 2048, 00:09:44.913 "data_size": 63488 00:09:44.913 } 00:09:44.913 ] 00:09:44.913 }' 
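The `verify_raid_bdev_state raid_bdev1 online raid1 0 1` calls above fetch `bdev_raid_get_bdevs all` and select the target bdev with jq before asserting on its fields. A self-contained sketch of the state assertion against a trimmed copy of the JSON dumped in this log; `sed` stands in for the harness's jq filter here, which is a simplification:

```shell
# Sketch of the state assertion: pull "state" out of raid_bdev_info and
# compare it with the expected value. raid_bdev_info is a trimmed copy of
# the JSON dumped above, not live bdev_raid_get_bdevs output.
raid_bdev_info='{ "name": "raid_bdev1", "state": "online", "raid_level": "raid1", "num_base_bdevs_discovered": 1 }'
expected_state=online

state=$(printf '%s' "$raid_bdev_info" | sed -n 's/.*"state": "\([^"]*\)".*/\1/p')
[ "$state" = "$expected_state" ] && echo "raid_bdev1 is $state"
```

After `bdev_passthru_delete pt1`, the same check is repeated with `num_base_bdevs_operational=1`, which is why the dumped list shows a null entry in place of pt1.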
00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.913 09:46:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.174 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:45.174 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.174 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.174 [2024-11-27 09:46:46.272471] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.174 [2024-11-27 09:46:46.272581] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.174 [2024-11-27 09:46:46.272731] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.174 [2024-11-27 09:46:46.272825] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.174 [2024-11-27 09:46:46.272882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:45.174 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.174 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:45.174 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.174 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.174 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.174 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.435 [2024-11-27 09:46:46.332380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:45.435 [2024-11-27 09:46:46.332473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.435 [2024-11-27 09:46:46.332493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:45.435 [2024-11-27 09:46:46.332506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.435 
[2024-11-27 09:46:46.335416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.435 [2024-11-27 09:46:46.335466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:45.435 [2024-11-27 09:46:46.335580] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:45.435 [2024-11-27 09:46:46.335639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:45.435 [2024-11-27 09:46:46.335763] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:45.435 [2024-11-27 09:46:46.335778] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:45.435 [2024-11-27 09:46:46.336095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:45.435 [2024-11-27 09:46:46.336300] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:45.435 [2024-11-27 09:46:46.336311] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:45.435 [2024-11-27 09:46:46.336556] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.435 pt2 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.435 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.435 "name": "raid_bdev1", 00:09:45.435 "uuid": "de1e896f-4682-4252-b58f-65b1fa50d8e3", 00:09:45.435 "strip_size_kb": 0, 00:09:45.435 "state": "online", 00:09:45.435 "raid_level": "raid1", 00:09:45.435 "superblock": true, 00:09:45.435 "num_base_bdevs": 2, 00:09:45.435 "num_base_bdevs_discovered": 1, 00:09:45.435 "num_base_bdevs_operational": 1, 00:09:45.435 "base_bdevs_list": [ 00:09:45.435 { 00:09:45.435 "name": null, 00:09:45.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.435 "is_configured": false, 00:09:45.435 "data_offset": 2048, 00:09:45.435 "data_size": 63488 00:09:45.435 }, 00:09:45.435 { 00:09:45.435 "name": "pt2", 00:09:45.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.435 "is_configured": true, 00:09:45.436 "data_offset": 2048, 00:09:45.436 "data_size": 63488 00:09:45.436 } 00:09:45.436 ] 00:09:45.436 }' 
00:09:45.436 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.436 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.697 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:45.697 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.697 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.697 [2024-11-27 09:46:46.792152] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.697 [2024-11-27 09:46:46.792256] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.697 [2024-11-27 09:46:46.792406] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.697 [2024-11-27 09:46:46.792497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.697 [2024-11-27 09:46:46.792568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:45.697 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.697 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.697 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.697 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.697 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:45.697 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.958 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:45.958 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:09:45.958 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:09:45.958 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:45.958 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.958 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.958 [2024-11-27 09:46:46.856203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:45.958 [2024-11-27 09:46:46.856376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.958 [2024-11-27 09:46:46.856422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:45.958 [2024-11-27 09:46:46.856469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.958 [2024-11-27 09:46:46.859326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.959 [2024-11-27 09:46:46.859417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:45.959 [2024-11-27 09:46:46.859570] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:45.959 [2024-11-27 09:46:46.859648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:45.959 [2024-11-27 09:46:46.859896] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:45.959 [2024-11-27 09:46:46.859959] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.959 [2024-11-27 09:46:46.860023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:45.959 [2024-11-27 09:46:46.860129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:09:45.959 [2024-11-27 09:46:46.860256] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:45.959 [2024-11-27 09:46:46.860298] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:45.959 [2024-11-27 09:46:46.860631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:45.959 [2024-11-27 09:46:46.860818] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:45.959 [2024-11-27 09:46:46.860834] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:45.959 [2024-11-27 09:46:46.861117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.959 pt1 00:09:45.959 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.959 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:09:45.959 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:45.959 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.959 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.959 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.959 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.959 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:45.959 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.959 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.959 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:45.959 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.959 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.959 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.959 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.959 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.959 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.959 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.959 "name": "raid_bdev1", 00:09:45.959 "uuid": "de1e896f-4682-4252-b58f-65b1fa50d8e3", 00:09:45.959 "strip_size_kb": 0, 00:09:45.959 "state": "online", 00:09:45.959 "raid_level": "raid1", 00:09:45.959 "superblock": true, 00:09:45.959 "num_base_bdevs": 2, 00:09:45.959 "num_base_bdevs_discovered": 1, 00:09:45.959 "num_base_bdevs_operational": 1, 00:09:45.959 "base_bdevs_list": [ 00:09:45.959 { 00:09:45.959 "name": null, 00:09:45.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.959 "is_configured": false, 00:09:45.959 "data_offset": 2048, 00:09:45.959 "data_size": 63488 00:09:45.959 }, 00:09:45.959 { 00:09:45.959 "name": "pt2", 00:09:45.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.959 "is_configured": true, 00:09:45.959 "data_offset": 2048, 00:09:45.959 "data_size": 63488 00:09:45.959 } 00:09:45.959 ] 00:09:45.959 }' 00:09:45.959 09:46:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.959 09:46:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.219 09:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:46.219 09:46:47 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:46.219 09:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.219 09:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.480 09:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.480 09:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:46.480 09:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:46.480 09:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.480 09:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.480 09:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:46.480 [2024-11-27 09:46:47.408470] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.480 09:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.480 09:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' de1e896f-4682-4252-b58f-65b1fa50d8e3 '!=' de1e896f-4682-4252-b58f-65b1fa50d8e3 ']' 00:09:46.480 09:46:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63459 00:09:46.480 09:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63459 ']' 00:09:46.480 09:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63459 00:09:46.480 09:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:46.480 09:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.480 09:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63459 00:09:46.480 09:46:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.480 killing process with pid 63459 00:09:46.480 09:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.480 09:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63459' 00:09:46.480 09:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63459 00:09:46.480 [2024-11-27 09:46:47.494706] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:46.480 [2024-11-27 09:46:47.494838] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.480 09:46:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63459 00:09:46.481 [2024-11-27 09:46:47.494902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:46.481 [2024-11-27 09:46:47.494920] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:46.741 [2024-11-27 09:46:47.731008] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:48.126 09:46:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:48.126 00:09:48.126 real 0m6.519s 00:09:48.126 user 0m9.703s 00:09:48.126 sys 0m1.237s 00:09:48.126 09:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.126 09:46:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.126 ************************************ 00:09:48.126 END TEST raid_superblock_test 00:09:48.126 ************************************ 00:09:48.126 09:46:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:09:48.126 09:46:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:48.126 09:46:49 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.126 09:46:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:48.126 ************************************ 00:09:48.126 START TEST raid_read_error_test 00:09:48.126 ************************************ 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:48.126 09:46:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wGDK34RhFL 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63795 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63795 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63795 ']' 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.126 09:46:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.126 [2024-11-27 09:46:49.184727] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:09:48.126 [2024-11-27 09:46:49.184975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63795 ] 00:09:48.386 [2024-11-27 09:46:49.367139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.386 [2024-11-27 09:46:49.511068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.646 [2024-11-27 09:46:49.748213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.646 [2024-11-27 09:46:49.748262] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.217 BaseBdev1_malloc 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.217 true 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.217 [2024-11-27 09:46:50.110697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:49.217 [2024-11-27 09:46:50.110759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.217 [2024-11-27 09:46:50.110782] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:49.217 [2024-11-27 09:46:50.110794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.217 [2024-11-27 09:46:50.113216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.217 [2024-11-27 09:46:50.113256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:49.217 BaseBdev1 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:49.217 BaseBdev2_malloc 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.217 true 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.217 [2024-11-27 09:46:50.185330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:49.217 [2024-11-27 09:46:50.185392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.217 [2024-11-27 09:46:50.185411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:49.217 [2024-11-27 09:46:50.185422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.217 [2024-11-27 09:46:50.187879] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.217 [2024-11-27 09:46:50.187921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:49.217 BaseBdev2 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:49.217 09:46:50 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.217 [2024-11-27 09:46:50.197391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.217 [2024-11-27 09:46:50.199581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.217 [2024-11-27 09:46:50.199921] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:49.217 [2024-11-27 09:46:50.199948] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:49.217 [2024-11-27 09:46:50.200285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:49.217 [2024-11-27 09:46:50.200518] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:49.217 [2024-11-27 09:46:50.200531] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:49.217 [2024-11-27 09:46:50.200707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.217 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.218 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.218 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:09:49.218 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.218 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.218 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.218 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.218 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.218 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.218 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.218 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.218 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.218 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.218 "name": "raid_bdev1", 00:09:49.218 "uuid": "465b7833-e558-4a25-86ee-8307a756acf6", 00:09:49.218 "strip_size_kb": 0, 00:09:49.218 "state": "online", 00:09:49.218 "raid_level": "raid1", 00:09:49.218 "superblock": true, 00:09:49.218 "num_base_bdevs": 2, 00:09:49.218 "num_base_bdevs_discovered": 2, 00:09:49.218 "num_base_bdevs_operational": 2, 00:09:49.218 "base_bdevs_list": [ 00:09:49.218 { 00:09:49.218 "name": "BaseBdev1", 00:09:49.218 "uuid": "6fe1491e-11ce-526d-8d8c-046ab77bc0ea", 00:09:49.218 "is_configured": true, 00:09:49.218 "data_offset": 2048, 00:09:49.218 "data_size": 63488 00:09:49.218 }, 00:09:49.218 { 00:09:49.218 "name": "BaseBdev2", 00:09:49.218 "uuid": "301f0c0a-1bdb-5eb1-9187-737a7e0ec732", 00:09:49.218 "is_configured": true, 00:09:49.218 "data_offset": 2048, 00:09:49.218 "data_size": 63488 00:09:49.218 } 00:09:49.218 ] 00:09:49.218 }' 00:09:49.218 09:46:50 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.218 09:46:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.787 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:49.787 09:46:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:49.787 [2024-11-27 09:46:50.721900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.727 09:46:51 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.727 "name": "raid_bdev1", 00:09:50.727 "uuid": "465b7833-e558-4a25-86ee-8307a756acf6", 00:09:50.727 "strip_size_kb": 0, 00:09:50.727 "state": "online", 00:09:50.727 "raid_level": "raid1", 00:09:50.727 "superblock": true, 00:09:50.727 "num_base_bdevs": 2, 00:09:50.727 "num_base_bdevs_discovered": 2, 00:09:50.727 "num_base_bdevs_operational": 2, 00:09:50.727 "base_bdevs_list": [ 00:09:50.727 { 00:09:50.727 "name": "BaseBdev1", 00:09:50.727 "uuid": "6fe1491e-11ce-526d-8d8c-046ab77bc0ea", 00:09:50.727 "is_configured": true, 00:09:50.727 "data_offset": 2048, 00:09:50.727 "data_size": 63488 00:09:50.727 }, 00:09:50.727 { 00:09:50.727 "name": "BaseBdev2", 00:09:50.727 "uuid": "301f0c0a-1bdb-5eb1-9187-737a7e0ec732", 00:09:50.727 "is_configured": true, 00:09:50.727 "data_offset": 2048, 00:09:50.727 "data_size": 63488 
00:09:50.727 } 00:09:50.727 ] 00:09:50.727 }' 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.727 09:46:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.297 09:46:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:51.297 09:46:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.297 09:46:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.297 [2024-11-27 09:46:52.132690] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.297 [2024-11-27 09:46:52.132820] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.297 [2024-11-27 09:46:52.135988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.297 [2024-11-27 09:46:52.136110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.297 [2024-11-27 09:46:52.136238] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.297 [2024-11-27 09:46:52.136255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:51.297 { 00:09:51.297 "results": [ 00:09:51.297 { 00:09:51.297 "job": "raid_bdev1", 00:09:51.297 "core_mask": "0x1", 00:09:51.297 "workload": "randrw", 00:09:51.297 "percentage": 50, 00:09:51.297 "status": "finished", 00:09:51.297 "queue_depth": 1, 00:09:51.297 "io_size": 131072, 00:09:51.297 "runtime": 1.411641, 00:09:51.297 "iops": 13566.480429514302, 00:09:51.297 "mibps": 1695.8100536892878, 00:09:51.297 "io_failed": 0, 00:09:51.297 "io_timeout": 0, 00:09:51.297 "avg_latency_us": 71.00317691233016, 00:09:51.297 "min_latency_us": 23.699563318777294, 00:09:51.297 "max_latency_us": 1366.5257641921398 00:09:51.297 } 00:09:51.297 ], 
00:09:51.297 "core_count": 1 00:09:51.297 } 00:09:51.297 09:46:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.297 09:46:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63795 00:09:51.297 09:46:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63795 ']' 00:09:51.297 09:46:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63795 00:09:51.297 09:46:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:51.297 09:46:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.297 09:46:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63795 00:09:51.297 09:46:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.297 09:46:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.297 09:46:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63795' 00:09:51.297 killing process with pid 63795 00:09:51.298 09:46:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63795 00:09:51.298 [2024-11-27 09:46:52.182509] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.298 09:46:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63795 00:09:51.298 [2024-11-27 09:46:52.334655] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.712 09:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wGDK34RhFL 00:09:52.712 09:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:52.712 09:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:52.712 09:46:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:52.712 09:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:52.712 09:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:52.712 09:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:52.712 09:46:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:52.712 00:09:52.712 real 0m4.571s 00:09:52.712 user 0m5.337s 00:09:52.712 sys 0m0.670s 00:09:52.712 ************************************ 00:09:52.712 END TEST raid_read_error_test 00:09:52.712 ************************************ 00:09:52.712 09:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.712 09:46:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.712 09:46:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:52.712 09:46:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:52.712 09:46:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.712 09:46:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.712 ************************************ 00:09:52.712 START TEST raid_write_error_test 00:09:52.712 ************************************ 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1M0U7BWsdF 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63939 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@811 -- # waitforlisten 63939 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63939 ']' 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.712 09:46:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.712 [2024-11-27 09:46:53.827946] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:09:52.712 [2024-11-27 09:46:53.828127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63939 ] 00:09:52.971 [2024-11-27 09:46:53.992793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.230 [2024-11-27 09:46:54.129223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.490 [2024-11-27 09:46:54.370942] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.490 [2024-11-27 09:46:54.370994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.750 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.750 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:53.750 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.750 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:53.750 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.750 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.750 BaseBdev1_malloc 00:09:53.750 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.750 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:53.750 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.751 true 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.751 [2024-11-27 09:46:54.741669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:53.751 [2024-11-27 09:46:54.741778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.751 [2024-11-27 09:46:54.741819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:53.751 [2024-11-27 09:46:54.741850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.751 [2024-11-27 09:46:54.744343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.751 [2024-11-27 09:46:54.744426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:53.751 BaseBdev1 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.751 BaseBdev2_malloc 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:53.751 09:46:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.751 true 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.751 [2024-11-27 09:46:54.815326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:53.751 [2024-11-27 09:46:54.815389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.751 [2024-11-27 09:46:54.815409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:53.751 [2024-11-27 09:46:54.815420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.751 [2024-11-27 09:46:54.817897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.751 [2024-11-27 09:46:54.817940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:53.751 BaseBdev2 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.751 [2024-11-27 09:46:54.827385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:53.751 [2024-11-27 09:46:54.829693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.751 [2024-11-27 09:46:54.829980] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:53.751 [2024-11-27 09:46:54.830049] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:53.751 [2024-11-27 09:46:54.830363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:53.751 [2024-11-27 09:46:54.830604] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:53.751 [2024-11-27 09:46:54.830649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:53.751 [2024-11-27 09:46:54.830864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.751 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.011 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.011 "name": "raid_bdev1", 00:09:54.011 "uuid": "d28c1d3a-a927-4f9f-8b13-aefa948b64d3", 00:09:54.011 "strip_size_kb": 0, 00:09:54.011 "state": "online", 00:09:54.011 "raid_level": "raid1", 00:09:54.011 "superblock": true, 00:09:54.011 "num_base_bdevs": 2, 00:09:54.011 "num_base_bdevs_discovered": 2, 00:09:54.011 "num_base_bdevs_operational": 2, 00:09:54.011 "base_bdevs_list": [ 00:09:54.011 { 00:09:54.011 "name": "BaseBdev1", 00:09:54.011 "uuid": "8f693068-ca6a-51cf-a49e-8317ab936cf3", 00:09:54.011 "is_configured": true, 00:09:54.011 "data_offset": 2048, 00:09:54.011 "data_size": 63488 00:09:54.011 }, 00:09:54.011 { 00:09:54.011 "name": "BaseBdev2", 00:09:54.011 "uuid": "9deaec91-b682-583e-b1ff-f1b81fe0ac06", 00:09:54.011 "is_configured": true, 00:09:54.011 "data_offset": 2048, 00:09:54.011 "data_size": 63488 00:09:54.011 } 00:09:54.011 ] 00:09:54.011 }' 00:09:54.011 09:46:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.011 09:46:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.270 09:46:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:54.270 09:46:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:54.530 [2024-11-27 09:46:55.403850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.470 [2024-11-27 09:46:56.319111] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:55.470 [2024-11-27 09:46:56.319255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:55.470 [2024-11-27 09:46:56.319498] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.470 "name": "raid_bdev1", 00:09:55.470 "uuid": "d28c1d3a-a927-4f9f-8b13-aefa948b64d3", 00:09:55.470 "strip_size_kb": 0, 00:09:55.470 "state": "online", 00:09:55.470 "raid_level": "raid1", 00:09:55.470 "superblock": true, 00:09:55.470 "num_base_bdevs": 2, 00:09:55.470 "num_base_bdevs_discovered": 1, 00:09:55.470 "num_base_bdevs_operational": 1, 00:09:55.470 "base_bdevs_list": [ 00:09:55.470 { 00:09:55.470 "name": null, 00:09:55.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.470 "is_configured": false, 00:09:55.470 "data_offset": 0, 00:09:55.470 "data_size": 63488 00:09:55.470 }, 00:09:55.470 { 00:09:55.470 "name": 
"BaseBdev2", 00:09:55.470 "uuid": "9deaec91-b682-583e-b1ff-f1b81fe0ac06", 00:09:55.470 "is_configured": true, 00:09:55.470 "data_offset": 2048, 00:09:55.470 "data_size": 63488 00:09:55.470 } 00:09:55.470 ] 00:09:55.470 }' 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.470 09:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.730 09:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:55.730 09:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.730 09:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.730 [2024-11-27 09:46:56.796810] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:55.730 [2024-11-27 09:46:56.796928] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.730 [2024-11-27 09:46:56.799707] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.730 [2024-11-27 09:46:56.799818] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.731 [2024-11-27 09:46:56.799908] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.731 [2024-11-27 09:46:56.799971] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:55.731 { 00:09:55.731 "results": [ 00:09:55.731 { 00:09:55.731 "job": "raid_bdev1", 00:09:55.731 "core_mask": "0x1", 00:09:55.731 "workload": "randrw", 00:09:55.731 "percentage": 50, 00:09:55.731 "status": "finished", 00:09:55.731 "queue_depth": 1, 00:09:55.731 "io_size": 131072, 00:09:55.731 "runtime": 1.393594, 00:09:55.731 "iops": 16071.395255720103, 00:09:55.731 "mibps": 2008.9244069650128, 00:09:55.731 "io_failed": 0, 00:09:55.731 "io_timeout": 0, 
00:09:55.731 "avg_latency_us": 59.51828685727365, 00:09:55.731 "min_latency_us": 22.358078602620086, 00:09:55.731 "max_latency_us": 1602.6270742358079 00:09:55.731 } 00:09:55.731 ], 00:09:55.731 "core_count": 1 00:09:55.731 } 00:09:55.731 09:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.731 09:46:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63939 00:09:55.731 09:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63939 ']' 00:09:55.731 09:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63939 00:09:55.731 09:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:55.731 09:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.731 09:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63939 00:09:55.731 09:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.731 09:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.731 09:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63939' 00:09:55.731 killing process with pid 63939 00:09:55.731 09:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63939 00:09:55.731 [2024-11-27 09:46:56.842014] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:55.731 09:46:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63939 00:09:55.991 [2024-11-27 09:46:57.001163] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.372 09:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1M0U7BWsdF 00:09:57.372 09:46:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:57.372 09:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:57.372 ************************************ 00:09:57.372 END TEST raid_write_error_test 00:09:57.372 ************************************ 00:09:57.372 09:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:57.372 09:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:57.372 09:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.372 09:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:57.372 09:46:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:57.372 00:09:57.372 real 0m4.625s 00:09:57.372 user 0m5.453s 00:09:57.372 sys 0m0.656s 00:09:57.372 09:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.372 09:46:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.372 09:46:58 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:57.372 09:46:58 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:57.372 09:46:58 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:57.372 09:46:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:57.372 09:46:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.372 09:46:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.372 ************************************ 00:09:57.372 START TEST raid_state_function_test 00:09:57.372 ************************************ 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:57.372 
09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64084 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64084' 00:09:57.372 Process raid pid: 64084 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64084 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64084 ']' 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.372 09:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.631 [2024-11-27 09:46:58.516818] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:09:57.631 [2024-11-27 09:46:58.517073] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.631 [2024-11-27 09:46:58.697603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.891 [2024-11-27 09:46:58.842227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.149 [2024-11-27 09:46:59.087737] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.149 [2024-11-27 09:46:59.087856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.409 [2024-11-27 09:46:59.357835] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.409 [2024-11-27 09:46:59.357959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.409 [2024-11-27 09:46:59.358002] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.409 [2024-11-27 09:46:59.358029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.409 [2024-11-27 09:46:59.358047] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:58.409 [2024-11-27 09:46:59.358068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.409 "name": "Existed_Raid", 00:09:58.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.409 "strip_size_kb": 64, 00:09:58.409 "state": "configuring", 00:09:58.409 "raid_level": "raid0", 00:09:58.409 "superblock": false, 00:09:58.409 "num_base_bdevs": 3, 00:09:58.409 "num_base_bdevs_discovered": 0, 00:09:58.409 "num_base_bdevs_operational": 3, 00:09:58.409 "base_bdevs_list": [ 00:09:58.409 { 00:09:58.409 "name": "BaseBdev1", 00:09:58.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.409 "is_configured": false, 00:09:58.409 "data_offset": 0, 00:09:58.409 "data_size": 0 00:09:58.409 }, 00:09:58.409 { 00:09:58.409 "name": "BaseBdev2", 00:09:58.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.409 "is_configured": false, 00:09:58.409 "data_offset": 0, 00:09:58.409 "data_size": 0 00:09:58.409 }, 00:09:58.409 { 00:09:58.409 "name": "BaseBdev3", 00:09:58.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.409 "is_configured": false, 00:09:58.409 "data_offset": 0, 00:09:58.409 "data_size": 0 00:09:58.409 } 00:09:58.409 ] 00:09:58.409 }' 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.409 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.669 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:58.669 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.669 09:46:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.669 [2024-11-27 09:46:59.765118] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.669 [2024-11-27 09:46:59.765212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:58.669 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.669 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:58.669 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.669 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.669 [2024-11-27 09:46:59.773087] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.669 [2024-11-27 09:46:59.773138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.669 [2024-11-27 09:46:59.773164] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.669 [2024-11-27 09:46:59.773175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.669 [2024-11-27 09:46:59.773182] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:58.669 [2024-11-27 09:46:59.773193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.669 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.669 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:58.669 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:58.669 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.929 [2024-11-27 09:46:59.824321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.929 BaseBdev1 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.929 [ 00:09:58.929 { 00:09:58.929 "name": "BaseBdev1", 00:09:58.929 "aliases": [ 00:09:58.929 "0ff2e6b6-980f-478f-b140-e3f130d9bd38" 00:09:58.929 ], 00:09:58.929 
"product_name": "Malloc disk", 00:09:58.929 "block_size": 512, 00:09:58.929 "num_blocks": 65536, 00:09:58.929 "uuid": "0ff2e6b6-980f-478f-b140-e3f130d9bd38", 00:09:58.929 "assigned_rate_limits": { 00:09:58.929 "rw_ios_per_sec": 0, 00:09:58.929 "rw_mbytes_per_sec": 0, 00:09:58.929 "r_mbytes_per_sec": 0, 00:09:58.929 "w_mbytes_per_sec": 0 00:09:58.929 }, 00:09:58.929 "claimed": true, 00:09:58.929 "claim_type": "exclusive_write", 00:09:58.929 "zoned": false, 00:09:58.929 "supported_io_types": { 00:09:58.929 "read": true, 00:09:58.929 "write": true, 00:09:58.929 "unmap": true, 00:09:58.929 "flush": true, 00:09:58.929 "reset": true, 00:09:58.929 "nvme_admin": false, 00:09:58.929 "nvme_io": false, 00:09:58.929 "nvme_io_md": false, 00:09:58.929 "write_zeroes": true, 00:09:58.929 "zcopy": true, 00:09:58.929 "get_zone_info": false, 00:09:58.929 "zone_management": false, 00:09:58.929 "zone_append": false, 00:09:58.929 "compare": false, 00:09:58.929 "compare_and_write": false, 00:09:58.929 "abort": true, 00:09:58.929 "seek_hole": false, 00:09:58.929 "seek_data": false, 00:09:58.929 "copy": true, 00:09:58.929 "nvme_iov_md": false 00:09:58.929 }, 00:09:58.929 "memory_domains": [ 00:09:58.929 { 00:09:58.929 "dma_device_id": "system", 00:09:58.929 "dma_device_type": 1 00:09:58.929 }, 00:09:58.929 { 00:09:58.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.929 "dma_device_type": 2 00:09:58.929 } 00:09:58.929 ], 00:09:58.929 "driver_specific": {} 00:09:58.929 } 00:09:58.929 ] 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.929 09:46:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.929 "name": "Existed_Raid", 00:09:58.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.929 "strip_size_kb": 64, 00:09:58.929 "state": "configuring", 00:09:58.929 "raid_level": "raid0", 00:09:58.929 "superblock": false, 00:09:58.929 "num_base_bdevs": 3, 00:09:58.929 "num_base_bdevs_discovered": 1, 00:09:58.929 "num_base_bdevs_operational": 3, 00:09:58.929 "base_bdevs_list": [ 00:09:58.929 { 00:09:58.929 "name": "BaseBdev1", 
00:09:58.929 "uuid": "0ff2e6b6-980f-478f-b140-e3f130d9bd38", 00:09:58.929 "is_configured": true, 00:09:58.929 "data_offset": 0, 00:09:58.929 "data_size": 65536 00:09:58.929 }, 00:09:58.929 { 00:09:58.929 "name": "BaseBdev2", 00:09:58.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.929 "is_configured": false, 00:09:58.929 "data_offset": 0, 00:09:58.929 "data_size": 0 00:09:58.929 }, 00:09:58.929 { 00:09:58.929 "name": "BaseBdev3", 00:09:58.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.929 "is_configured": false, 00:09:58.929 "data_offset": 0, 00:09:58.929 "data_size": 0 00:09:58.929 } 00:09:58.929 ] 00:09:58.929 }' 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.929 09:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.188 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.188 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.188 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.188 [2024-11-27 09:47:00.307667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.188 [2024-11-27 09:47:00.307793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:59.188 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.188 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:59.188 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.188 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.450 [2024-11-27 
09:47:00.319695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.450 [2024-11-27 09:47:00.322070] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.450 [2024-11-27 09:47:00.322161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.450 [2024-11-27 09:47:00.322194] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:59.450 [2024-11-27 09:47:00.322218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.450 "name": "Existed_Raid", 00:09:59.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.450 "strip_size_kb": 64, 00:09:59.450 "state": "configuring", 00:09:59.450 "raid_level": "raid0", 00:09:59.450 "superblock": false, 00:09:59.450 "num_base_bdevs": 3, 00:09:59.450 "num_base_bdevs_discovered": 1, 00:09:59.450 "num_base_bdevs_operational": 3, 00:09:59.450 "base_bdevs_list": [ 00:09:59.450 { 00:09:59.450 "name": "BaseBdev1", 00:09:59.450 "uuid": "0ff2e6b6-980f-478f-b140-e3f130d9bd38", 00:09:59.450 "is_configured": true, 00:09:59.450 "data_offset": 0, 00:09:59.450 "data_size": 65536 00:09:59.450 }, 00:09:59.450 { 00:09:59.450 "name": "BaseBdev2", 00:09:59.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.450 "is_configured": false, 00:09:59.450 "data_offset": 0, 00:09:59.450 "data_size": 0 00:09:59.450 }, 00:09:59.450 { 00:09:59.450 "name": "BaseBdev3", 00:09:59.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.450 "is_configured": false, 00:09:59.450 "data_offset": 0, 00:09:59.450 "data_size": 0 00:09:59.450 } 00:09:59.450 ] 00:09:59.450 }' 00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:59.450 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.718 [2024-11-27 09:47:00.809865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.718 BaseBdev2 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:59.718 09:47:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.718 [ 00:09:59.718 { 00:09:59.718 "name": "BaseBdev2", 00:09:59.718 "aliases": [ 00:09:59.718 "aaf89cbe-0367-4b52-a2e8-35e08c037748" 00:09:59.718 ], 00:09:59.718 "product_name": "Malloc disk", 00:09:59.718 "block_size": 512, 00:09:59.718 "num_blocks": 65536, 00:09:59.718 "uuid": "aaf89cbe-0367-4b52-a2e8-35e08c037748", 00:09:59.718 "assigned_rate_limits": { 00:09:59.718 "rw_ios_per_sec": 0, 00:09:59.718 "rw_mbytes_per_sec": 0, 00:09:59.718 "r_mbytes_per_sec": 0, 00:09:59.718 "w_mbytes_per_sec": 0 00:09:59.718 }, 00:09:59.718 "claimed": true, 00:09:59.718 "claim_type": "exclusive_write", 00:09:59.718 "zoned": false, 00:09:59.718 "supported_io_types": { 00:09:59.718 "read": true, 00:09:59.718 "write": true, 00:09:59.718 "unmap": true, 00:09:59.718 "flush": true, 00:09:59.718 "reset": true, 00:09:59.718 "nvme_admin": false, 00:09:59.718 "nvme_io": false, 00:09:59.718 "nvme_io_md": false, 00:09:59.718 "write_zeroes": true, 00:09:59.718 "zcopy": true, 00:09:59.718 "get_zone_info": false, 00:09:59.718 "zone_management": false, 00:09:59.718 "zone_append": false, 00:09:59.718 "compare": false, 00:09:59.718 "compare_and_write": false, 00:09:59.718 "abort": true, 00:09:59.718 "seek_hole": false, 00:09:59.718 "seek_data": false, 00:09:59.718 "copy": true, 00:09:59.718 "nvme_iov_md": false 00:09:59.718 }, 00:09:59.718 "memory_domains": [ 00:09:59.718 { 00:09:59.718 "dma_device_id": "system", 00:09:59.718 "dma_device_type": 1 00:09:59.718 }, 00:09:59.718 { 00:09:59.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.718 "dma_device_type": 2 00:09:59.718 } 00:09:59.718 ], 00:09:59.718 "driver_specific": {} 00:09:59.718 } 00:09:59.718 ] 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.718 09:47:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.718 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.978 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.978 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.978 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.978 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.978 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.978 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.978 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.978 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.978 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.978 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.978 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.978 09:47:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.978 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.978 "name": "Existed_Raid", 00:09:59.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.978 "strip_size_kb": 64, 00:09:59.978 "state": "configuring", 00:09:59.978 "raid_level": "raid0", 00:09:59.978 "superblock": false, 00:09:59.978 "num_base_bdevs": 3, 00:09:59.978 "num_base_bdevs_discovered": 2, 00:09:59.978 "num_base_bdevs_operational": 3, 00:09:59.978 "base_bdevs_list": [ 00:09:59.978 { 00:09:59.978 "name": "BaseBdev1", 00:09:59.978 "uuid": "0ff2e6b6-980f-478f-b140-e3f130d9bd38", 00:09:59.978 "is_configured": true, 00:09:59.978 "data_offset": 0, 00:09:59.978 "data_size": 65536 00:09:59.978 }, 00:09:59.978 { 00:09:59.978 "name": "BaseBdev2", 00:09:59.978 "uuid": "aaf89cbe-0367-4b52-a2e8-35e08c037748", 00:09:59.978 "is_configured": true, 00:09:59.978 "data_offset": 0, 00:09:59.978 "data_size": 65536 00:09:59.978 }, 00:09:59.978 { 00:09:59.978 "name": "BaseBdev3", 00:09:59.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.978 "is_configured": false, 00:09:59.978 "data_offset": 0, 00:09:59.978 "data_size": 0 00:09:59.978 } 00:09:59.978 ] 00:09:59.978 }' 00:09:59.978 09:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.978 09:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.237 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:00.237 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.237 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.237 [2024-11-27 09:47:01.355770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.237 [2024-11-27 09:47:01.355900] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:00.237 [2024-11-27 09:47:01.355923] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:00.237 [2024-11-27 09:47:01.356280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:00.237 [2024-11-27 09:47:01.356504] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:00.237 [2024-11-27 09:47:01.356516] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:00.238 [2024-11-27 09:47:01.356844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.238 BaseBdev3 00:10:00.238 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.238 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:00.238 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:00.238 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.238 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.238 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.238 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.238 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.238 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.238 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.498 
09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.498 [ 00:10:00.498 { 00:10:00.498 "name": "BaseBdev3", 00:10:00.498 "aliases": [ 00:10:00.498 "0f0d3399-83cb-4da8-a21c-a2d8779be854" 00:10:00.498 ], 00:10:00.498 "product_name": "Malloc disk", 00:10:00.498 "block_size": 512, 00:10:00.498 "num_blocks": 65536, 00:10:00.498 "uuid": "0f0d3399-83cb-4da8-a21c-a2d8779be854", 00:10:00.498 "assigned_rate_limits": { 00:10:00.498 "rw_ios_per_sec": 0, 00:10:00.498 "rw_mbytes_per_sec": 0, 00:10:00.498 "r_mbytes_per_sec": 0, 00:10:00.498 "w_mbytes_per_sec": 0 00:10:00.498 }, 00:10:00.498 "claimed": true, 00:10:00.498 "claim_type": "exclusive_write", 00:10:00.498 "zoned": false, 00:10:00.498 "supported_io_types": { 00:10:00.498 "read": true, 00:10:00.498 "write": true, 00:10:00.498 "unmap": true, 00:10:00.498 "flush": true, 00:10:00.498 "reset": true, 00:10:00.498 "nvme_admin": false, 00:10:00.498 "nvme_io": false, 00:10:00.498 "nvme_io_md": false, 00:10:00.498 "write_zeroes": true, 00:10:00.498 "zcopy": true, 00:10:00.498 "get_zone_info": false, 00:10:00.498 "zone_management": false, 00:10:00.498 "zone_append": false, 00:10:00.498 "compare": false, 00:10:00.498 "compare_and_write": false, 00:10:00.498 "abort": true, 00:10:00.498 "seek_hole": false, 00:10:00.498 "seek_data": false, 00:10:00.498 "copy": true, 00:10:00.498 "nvme_iov_md": false 00:10:00.498 }, 00:10:00.498 "memory_domains": [ 00:10:00.498 { 00:10:00.498 "dma_device_id": "system", 00:10:00.498 "dma_device_type": 1 00:10:00.498 }, 00:10:00.498 { 00:10:00.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.498 "dma_device_type": 2 00:10:00.498 } 00:10:00.498 ], 00:10:00.498 "driver_specific": {} 00:10:00.498 } 00:10:00.498 ] 
00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.498 "name": "Existed_Raid", 00:10:00.498 "uuid": "4f99b089-9472-4cf1-8168-ed063f51c0a5", 00:10:00.498 "strip_size_kb": 64, 00:10:00.498 "state": "online", 00:10:00.498 "raid_level": "raid0", 00:10:00.498 "superblock": false, 00:10:00.498 "num_base_bdevs": 3, 00:10:00.498 "num_base_bdevs_discovered": 3, 00:10:00.498 "num_base_bdevs_operational": 3, 00:10:00.498 "base_bdevs_list": [ 00:10:00.498 { 00:10:00.498 "name": "BaseBdev1", 00:10:00.498 "uuid": "0ff2e6b6-980f-478f-b140-e3f130d9bd38", 00:10:00.498 "is_configured": true, 00:10:00.498 "data_offset": 0, 00:10:00.498 "data_size": 65536 00:10:00.498 }, 00:10:00.498 { 00:10:00.498 "name": "BaseBdev2", 00:10:00.498 "uuid": "aaf89cbe-0367-4b52-a2e8-35e08c037748", 00:10:00.498 "is_configured": true, 00:10:00.498 "data_offset": 0, 00:10:00.498 "data_size": 65536 00:10:00.498 }, 00:10:00.498 { 00:10:00.498 "name": "BaseBdev3", 00:10:00.498 "uuid": "0f0d3399-83cb-4da8-a21c-a2d8779be854", 00:10:00.498 "is_configured": true, 00:10:00.498 "data_offset": 0, 00:10:00.498 "data_size": 65536 00:10:00.498 } 00:10:00.498 ] 00:10:00.498 }' 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.498 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.759 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:00.759 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:00.759 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:00.759 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:00.759 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:00.759 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:00.759 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:00.759 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:00.759 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.759 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.759 [2024-11-27 09:47:01.855395] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:00.759 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.020 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.020 "name": "Existed_Raid", 00:10:01.020 "aliases": [ 00:10:01.020 "4f99b089-9472-4cf1-8168-ed063f51c0a5" 00:10:01.020 ], 00:10:01.020 "product_name": "Raid Volume", 00:10:01.020 "block_size": 512, 00:10:01.020 "num_blocks": 196608, 00:10:01.020 "uuid": "4f99b089-9472-4cf1-8168-ed063f51c0a5", 00:10:01.020 "assigned_rate_limits": { 00:10:01.020 "rw_ios_per_sec": 0, 00:10:01.020 "rw_mbytes_per_sec": 0, 00:10:01.020 "r_mbytes_per_sec": 0, 00:10:01.020 "w_mbytes_per_sec": 0 00:10:01.020 }, 00:10:01.020 "claimed": false, 00:10:01.020 "zoned": false, 00:10:01.020 "supported_io_types": { 00:10:01.020 "read": true, 00:10:01.020 "write": true, 00:10:01.020 "unmap": true, 00:10:01.020 "flush": true, 00:10:01.020 "reset": true, 00:10:01.020 "nvme_admin": false, 00:10:01.020 "nvme_io": false, 00:10:01.020 "nvme_io_md": false, 00:10:01.020 "write_zeroes": true, 00:10:01.020 "zcopy": false, 00:10:01.020 "get_zone_info": false, 00:10:01.020 "zone_management": false, 00:10:01.020 
"zone_append": false, 00:10:01.020 "compare": false, 00:10:01.020 "compare_and_write": false, 00:10:01.020 "abort": false, 00:10:01.020 "seek_hole": false, 00:10:01.020 "seek_data": false, 00:10:01.020 "copy": false, 00:10:01.020 "nvme_iov_md": false 00:10:01.020 }, 00:10:01.020 "memory_domains": [ 00:10:01.020 { 00:10:01.020 "dma_device_id": "system", 00:10:01.020 "dma_device_type": 1 00:10:01.020 }, 00:10:01.020 { 00:10:01.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.020 "dma_device_type": 2 00:10:01.020 }, 00:10:01.020 { 00:10:01.020 "dma_device_id": "system", 00:10:01.020 "dma_device_type": 1 00:10:01.020 }, 00:10:01.020 { 00:10:01.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.020 "dma_device_type": 2 00:10:01.020 }, 00:10:01.020 { 00:10:01.020 "dma_device_id": "system", 00:10:01.020 "dma_device_type": 1 00:10:01.020 }, 00:10:01.020 { 00:10:01.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.020 "dma_device_type": 2 00:10:01.020 } 00:10:01.020 ], 00:10:01.020 "driver_specific": { 00:10:01.020 "raid": { 00:10:01.020 "uuid": "4f99b089-9472-4cf1-8168-ed063f51c0a5", 00:10:01.020 "strip_size_kb": 64, 00:10:01.020 "state": "online", 00:10:01.020 "raid_level": "raid0", 00:10:01.020 "superblock": false, 00:10:01.020 "num_base_bdevs": 3, 00:10:01.020 "num_base_bdevs_discovered": 3, 00:10:01.020 "num_base_bdevs_operational": 3, 00:10:01.020 "base_bdevs_list": [ 00:10:01.020 { 00:10:01.020 "name": "BaseBdev1", 00:10:01.020 "uuid": "0ff2e6b6-980f-478f-b140-e3f130d9bd38", 00:10:01.020 "is_configured": true, 00:10:01.020 "data_offset": 0, 00:10:01.020 "data_size": 65536 00:10:01.020 }, 00:10:01.020 { 00:10:01.020 "name": "BaseBdev2", 00:10:01.020 "uuid": "aaf89cbe-0367-4b52-a2e8-35e08c037748", 00:10:01.020 "is_configured": true, 00:10:01.020 "data_offset": 0, 00:10:01.020 "data_size": 65536 00:10:01.020 }, 00:10:01.020 { 00:10:01.020 "name": "BaseBdev3", 00:10:01.020 "uuid": "0f0d3399-83cb-4da8-a21c-a2d8779be854", 00:10:01.020 "is_configured": true, 
00:10:01.020 "data_offset": 0, 00:10:01.020 "data_size": 65536 00:10:01.020 } 00:10:01.020 ] 00:10:01.020 } 00:10:01.020 } 00:10:01.020 }' 00:10:01.020 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.020 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:01.020 BaseBdev2 00:10:01.020 BaseBdev3' 00:10:01.020 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.020 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.020 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.020 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:01.020 09:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.020 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.020 09:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.020 09:47:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.020 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.020 [2024-11-27 09:47:02.134653] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:01.021 [2024-11-27 09:47:02.134767] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.021 [2024-11-27 09:47:02.134857] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.281 "name": "Existed_Raid", 00:10:01.281 "uuid": "4f99b089-9472-4cf1-8168-ed063f51c0a5", 00:10:01.281 "strip_size_kb": 64, 00:10:01.281 "state": "offline", 00:10:01.281 "raid_level": "raid0", 00:10:01.281 "superblock": false, 00:10:01.281 "num_base_bdevs": 3, 00:10:01.281 "num_base_bdevs_discovered": 2, 00:10:01.281 "num_base_bdevs_operational": 2, 00:10:01.281 "base_bdevs_list": [ 00:10:01.281 { 00:10:01.281 "name": null, 00:10:01.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.281 "is_configured": false, 00:10:01.281 "data_offset": 0, 00:10:01.281 "data_size": 65536 00:10:01.281 }, 00:10:01.281 { 00:10:01.281 "name": "BaseBdev2", 00:10:01.281 "uuid": "aaf89cbe-0367-4b52-a2e8-35e08c037748", 00:10:01.281 "is_configured": true, 00:10:01.281 "data_offset": 0, 00:10:01.281 "data_size": 65536 00:10:01.281 }, 00:10:01.281 { 00:10:01.281 "name": "BaseBdev3", 00:10:01.281 "uuid": "0f0d3399-83cb-4da8-a21c-a2d8779be854", 00:10:01.281 "is_configured": true, 00:10:01.281 "data_offset": 0, 00:10:01.281 "data_size": 65536 00:10:01.281 } 00:10:01.281 ] 00:10:01.281 }' 00:10:01.281 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.281 09:47:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.542 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:01.542 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.542 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.542 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.542 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:01.542 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.542 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.802 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:01.802 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.802 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:01.802 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.802 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.802 [2024-11-27 09:47:02.680367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:01.802 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.802 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:01.802 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.802 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.802 09:47:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:01.802 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.802 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.802 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.802 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:01.802 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.802 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:01.802 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.802 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.802 [2024-11-27 09:47:02.846463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:01.802 [2024-11-27 09:47:02.846577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:02.064 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.064 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.064 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.064 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.064 09:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:02.064 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.064 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:02.064 09:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.064 BaseBdev2 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.064 [ 00:10:02.064 { 00:10:02.064 "name": "BaseBdev2", 00:10:02.064 "aliases": [ 00:10:02.064 "69b989c4-b4eb-4374-b47f-cfffe647f4d5" 00:10:02.064 ], 00:10:02.064 "product_name": "Malloc disk", 00:10:02.064 "block_size": 512, 00:10:02.064 "num_blocks": 65536, 00:10:02.064 "uuid": "69b989c4-b4eb-4374-b47f-cfffe647f4d5", 00:10:02.064 "assigned_rate_limits": { 00:10:02.064 "rw_ios_per_sec": 0, 00:10:02.064 "rw_mbytes_per_sec": 0, 00:10:02.064 "r_mbytes_per_sec": 0, 00:10:02.064 "w_mbytes_per_sec": 0 00:10:02.064 }, 00:10:02.064 "claimed": false, 00:10:02.064 "zoned": false, 00:10:02.064 "supported_io_types": { 00:10:02.064 "read": true, 00:10:02.064 "write": true, 00:10:02.064 "unmap": true, 00:10:02.064 "flush": true, 00:10:02.064 "reset": true, 00:10:02.064 "nvme_admin": false, 00:10:02.064 "nvme_io": false, 00:10:02.064 "nvme_io_md": false, 00:10:02.064 "write_zeroes": true, 00:10:02.064 "zcopy": true, 00:10:02.064 "get_zone_info": false, 00:10:02.064 "zone_management": false, 00:10:02.064 "zone_append": false, 00:10:02.064 "compare": false, 00:10:02.064 "compare_and_write": false, 00:10:02.064 "abort": true, 00:10:02.064 "seek_hole": false, 00:10:02.064 "seek_data": false, 00:10:02.064 "copy": true, 00:10:02.064 "nvme_iov_md": false 00:10:02.064 }, 00:10:02.064 "memory_domains": [ 00:10:02.064 { 00:10:02.064 "dma_device_id": "system", 00:10:02.064 "dma_device_type": 1 00:10:02.064 }, 
00:10:02.064 { 00:10:02.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.064 "dma_device_type": 2 00:10:02.064 } 00:10:02.064 ], 00:10:02.064 "driver_specific": {} 00:10:02.064 } 00:10:02.064 ] 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.064 BaseBdev3 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
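Annotation: the geometry check earlier in this trace (`cmp_raid_bdev='512 '` matched against the pattern `\5\1\2\ \ \ `) gets its trailing blanks from jq's `join(" ")`, which renders the null `md_size`/`md_interleave`/`dif_type` fields of a metadata-less malloc bdev as empty strings. A small Python illustration of that jq behavior (the trimmed record is a hand-written stand-in for `bdev_get_bdevs` output):

```python
import json

# Trimmed stand-in for one bdev_get_bdevs record: a plain malloc bdev
# without metadata reports null for md_size, md_interleave and dif_type.
bdev = json.loads(
    '{"block_size": 512, "md_size": null, "md_interleave": null, "dif_type": null}'
)

# jq's join(" ") turns null into "" before joining, so the result is
# "512" followed by three spaces -- the trailing blanks the [[ ... ]]
# pattern in the test script is written to match.
joined = " ".join(
    "" if bdev[k] is None else str(bdev[k])
    for k in ("block_size", "md_size", "md_interleave", "dif_type")
)
assert joined == "512   "
```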
00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.064 [ 00:10:02.064 { 00:10:02.064 "name": "BaseBdev3", 00:10:02.064 "aliases": [ 00:10:02.064 "33260954-20d0-4080-9dcd-87de993421e8" 00:10:02.064 ], 00:10:02.064 "product_name": "Malloc disk", 00:10:02.064 "block_size": 512, 00:10:02.064 "num_blocks": 65536, 00:10:02.064 "uuid": "33260954-20d0-4080-9dcd-87de993421e8", 00:10:02.064 "assigned_rate_limits": { 00:10:02.064 "rw_ios_per_sec": 0, 00:10:02.064 "rw_mbytes_per_sec": 0, 00:10:02.064 "r_mbytes_per_sec": 0, 00:10:02.064 "w_mbytes_per_sec": 0 00:10:02.064 }, 00:10:02.064 "claimed": false, 00:10:02.064 "zoned": false, 00:10:02.064 "supported_io_types": { 00:10:02.064 "read": true, 00:10:02.064 "write": true, 00:10:02.064 "unmap": true, 00:10:02.064 "flush": true, 00:10:02.064 "reset": true, 00:10:02.064 "nvme_admin": false, 00:10:02.064 "nvme_io": false, 00:10:02.064 "nvme_io_md": false, 00:10:02.064 "write_zeroes": true, 00:10:02.064 "zcopy": true, 00:10:02.064 "get_zone_info": false, 00:10:02.064 "zone_management": false, 00:10:02.064 "zone_append": false, 00:10:02.064 "compare": false, 00:10:02.064 "compare_and_write": false, 00:10:02.064 "abort": true, 00:10:02.064 "seek_hole": false, 00:10:02.064 "seek_data": false, 00:10:02.064 "copy": true, 00:10:02.064 "nvme_iov_md": false 00:10:02.064 }, 00:10:02.064 "memory_domains": [ 00:10:02.064 { 00:10:02.064 "dma_device_id": "system", 00:10:02.064 "dma_device_type": 1 00:10:02.064 }, 00:10:02.064 { 
00:10:02.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.064 "dma_device_type": 2 00:10:02.064 } 00:10:02.064 ], 00:10:02.064 "driver_specific": {} 00:10:02.064 } 00:10:02.064 ] 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.064 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.065 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:02.065 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.065 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.065 [2024-11-27 09:47:03.184289] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.065 [2024-11-27 09:47:03.184398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.065 [2024-11-27 09:47:03.184455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.065 [2024-11-27 09:47:03.186660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.065 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.065 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:02.065 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.065 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:02.065 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.065 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.065 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.065 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.065 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.065 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.065 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.324 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.324 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.324 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.324 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.324 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.324 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.324 "name": "Existed_Raid", 00:10:02.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.324 "strip_size_kb": 64, 00:10:02.324 "state": "configuring", 00:10:02.324 "raid_level": "raid0", 00:10:02.324 "superblock": false, 00:10:02.324 "num_base_bdevs": 3, 00:10:02.324 "num_base_bdevs_discovered": 2, 00:10:02.324 "num_base_bdevs_operational": 3, 00:10:02.324 "base_bdevs_list": [ 00:10:02.324 { 00:10:02.324 "name": "BaseBdev1", 00:10:02.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.324 
"is_configured": false, 00:10:02.324 "data_offset": 0, 00:10:02.324 "data_size": 0 00:10:02.324 }, 00:10:02.324 { 00:10:02.324 "name": "BaseBdev2", 00:10:02.324 "uuid": "69b989c4-b4eb-4374-b47f-cfffe647f4d5", 00:10:02.324 "is_configured": true, 00:10:02.324 "data_offset": 0, 00:10:02.324 "data_size": 65536 00:10:02.324 }, 00:10:02.324 { 00:10:02.324 "name": "BaseBdev3", 00:10:02.324 "uuid": "33260954-20d0-4080-9dcd-87de993421e8", 00:10:02.324 "is_configured": true, 00:10:02.324 "data_offset": 0, 00:10:02.324 "data_size": 65536 00:10:02.324 } 00:10:02.324 ] 00:10:02.324 }' 00:10:02.324 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.324 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.584 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:02.584 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.584 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.584 [2024-11-27 09:47:03.615610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:02.584 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.584 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:02.584 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.584 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.584 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.584 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.584 09:47:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.584 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.584 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.584 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.584 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.584 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.584 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.584 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.584 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.584 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.584 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.584 "name": "Existed_Raid", 00:10:02.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.584 "strip_size_kb": 64, 00:10:02.584 "state": "configuring", 00:10:02.584 "raid_level": "raid0", 00:10:02.584 "superblock": false, 00:10:02.584 "num_base_bdevs": 3, 00:10:02.584 "num_base_bdevs_discovered": 1, 00:10:02.584 "num_base_bdevs_operational": 3, 00:10:02.584 "base_bdevs_list": [ 00:10:02.584 { 00:10:02.584 "name": "BaseBdev1", 00:10:02.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.584 "is_configured": false, 00:10:02.584 "data_offset": 0, 00:10:02.584 "data_size": 0 00:10:02.584 }, 00:10:02.584 { 00:10:02.584 "name": null, 00:10:02.584 "uuid": "69b989c4-b4eb-4374-b47f-cfffe647f4d5", 00:10:02.584 "is_configured": false, 00:10:02.584 "data_offset": 0, 
00:10:02.584 "data_size": 65536 00:10:02.584 }, 00:10:02.584 { 00:10:02.585 "name": "BaseBdev3", 00:10:02.585 "uuid": "33260954-20d0-4080-9dcd-87de993421e8", 00:10:02.585 "is_configured": true, 00:10:02.585 "data_offset": 0, 00:10:02.585 "data_size": 65536 00:10:02.585 } 00:10:02.585 ] 00:10:02.585 }' 00:10:02.585 09:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.585 09:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.154 [2024-11-27 09:47:04.122483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.154 BaseBdev1 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.154 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.154 [ 00:10:03.155 { 00:10:03.155 "name": "BaseBdev1", 00:10:03.155 "aliases": [ 00:10:03.155 "1463511b-3a57-4549-8646-d53839bd6e7b" 00:10:03.155 ], 00:10:03.155 "product_name": "Malloc disk", 00:10:03.155 "block_size": 512, 00:10:03.155 "num_blocks": 65536, 00:10:03.155 "uuid": "1463511b-3a57-4549-8646-d53839bd6e7b", 00:10:03.155 "assigned_rate_limits": { 00:10:03.155 "rw_ios_per_sec": 0, 00:10:03.155 "rw_mbytes_per_sec": 0, 00:10:03.155 "r_mbytes_per_sec": 0, 00:10:03.155 "w_mbytes_per_sec": 0 00:10:03.155 }, 00:10:03.155 "claimed": true, 00:10:03.155 "claim_type": "exclusive_write", 00:10:03.155 "zoned": false, 00:10:03.155 "supported_io_types": { 00:10:03.155 "read": true, 00:10:03.155 "write": true, 00:10:03.155 "unmap": 
true, 00:10:03.155 "flush": true, 00:10:03.155 "reset": true, 00:10:03.155 "nvme_admin": false, 00:10:03.155 "nvme_io": false, 00:10:03.155 "nvme_io_md": false, 00:10:03.155 "write_zeroes": true, 00:10:03.155 "zcopy": true, 00:10:03.155 "get_zone_info": false, 00:10:03.155 "zone_management": false, 00:10:03.155 "zone_append": false, 00:10:03.155 "compare": false, 00:10:03.155 "compare_and_write": false, 00:10:03.155 "abort": true, 00:10:03.155 "seek_hole": false, 00:10:03.155 "seek_data": false, 00:10:03.155 "copy": true, 00:10:03.155 "nvme_iov_md": false 00:10:03.155 }, 00:10:03.155 "memory_domains": [ 00:10:03.155 { 00:10:03.155 "dma_device_id": "system", 00:10:03.155 "dma_device_type": 1 00:10:03.155 }, 00:10:03.155 { 00:10:03.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.155 "dma_device_type": 2 00:10:03.155 } 00:10:03.155 ], 00:10:03.155 "driver_specific": {} 00:10:03.155 } 00:10:03.155 ] 00:10:03.155 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.155 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:03.155 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:03.155 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.155 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.155 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.155 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.155 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.155 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.155 09:47:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.155 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.155 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.155 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.155 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.155 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.155 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.155 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.155 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.155 "name": "Existed_Raid", 00:10:03.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.155 "strip_size_kb": 64, 00:10:03.155 "state": "configuring", 00:10:03.155 "raid_level": "raid0", 00:10:03.155 "superblock": false, 00:10:03.155 "num_base_bdevs": 3, 00:10:03.155 "num_base_bdevs_discovered": 2, 00:10:03.155 "num_base_bdevs_operational": 3, 00:10:03.155 "base_bdevs_list": [ 00:10:03.155 { 00:10:03.155 "name": "BaseBdev1", 00:10:03.155 "uuid": "1463511b-3a57-4549-8646-d53839bd6e7b", 00:10:03.155 "is_configured": true, 00:10:03.155 "data_offset": 0, 00:10:03.155 "data_size": 65536 00:10:03.155 }, 00:10:03.155 { 00:10:03.155 "name": null, 00:10:03.155 "uuid": "69b989c4-b4eb-4374-b47f-cfffe647f4d5", 00:10:03.155 "is_configured": false, 00:10:03.155 "data_offset": 0, 00:10:03.155 "data_size": 65536 00:10:03.155 }, 00:10:03.155 { 00:10:03.155 "name": "BaseBdev3", 00:10:03.155 "uuid": "33260954-20d0-4080-9dcd-87de993421e8", 00:10:03.155 "is_configured": true, 00:10:03.155 "data_offset": 0, 
00:10:03.155 "data_size": 65536 00:10:03.155 } 00:10:03.155 ] 00:10:03.155 }' 00:10:03.155 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.155 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.725 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.725 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:03.725 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.725 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.725 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.725 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:03.725 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:03.725 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.725 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.725 [2024-11-27 09:47:04.645656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:03.725 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.725 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:03.725 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.725 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.725 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:03.725 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.725 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.725 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.725 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.726 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.726 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.726 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.726 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.726 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.726 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.726 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.726 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.726 "name": "Existed_Raid", 00:10:03.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.726 "strip_size_kb": 64, 00:10:03.726 "state": "configuring", 00:10:03.726 "raid_level": "raid0", 00:10:03.726 "superblock": false, 00:10:03.726 "num_base_bdevs": 3, 00:10:03.726 "num_base_bdevs_discovered": 1, 00:10:03.726 "num_base_bdevs_operational": 3, 00:10:03.726 "base_bdevs_list": [ 00:10:03.726 { 00:10:03.726 "name": "BaseBdev1", 00:10:03.726 "uuid": "1463511b-3a57-4549-8646-d53839bd6e7b", 00:10:03.726 "is_configured": true, 00:10:03.726 "data_offset": 0, 00:10:03.726 "data_size": 65536 00:10:03.726 }, 00:10:03.726 { 
00:10:03.726 "name": null, 00:10:03.726 "uuid": "69b989c4-b4eb-4374-b47f-cfffe647f4d5", 00:10:03.726 "is_configured": false, 00:10:03.726 "data_offset": 0, 00:10:03.726 "data_size": 65536 00:10:03.726 }, 00:10:03.726 { 00:10:03.726 "name": null, 00:10:03.726 "uuid": "33260954-20d0-4080-9dcd-87de993421e8", 00:10:03.726 "is_configured": false, 00:10:03.726 "data_offset": 0, 00:10:03.726 "data_size": 65536 00:10:03.726 } 00:10:03.726 ] 00:10:03.726 }' 00:10:03.726 09:47:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.726 09:47:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.986 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.986 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.986 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.986 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:03.986 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.247 [2024-11-27 09:47:05.140896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.247 "name": "Existed_Raid", 00:10:04.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.247 "strip_size_kb": 64, 00:10:04.247 "state": "configuring", 00:10:04.247 "raid_level": "raid0", 00:10:04.247 
"superblock": false, 00:10:04.247 "num_base_bdevs": 3, 00:10:04.247 "num_base_bdevs_discovered": 2, 00:10:04.247 "num_base_bdevs_operational": 3, 00:10:04.247 "base_bdevs_list": [ 00:10:04.247 { 00:10:04.247 "name": "BaseBdev1", 00:10:04.247 "uuid": "1463511b-3a57-4549-8646-d53839bd6e7b", 00:10:04.247 "is_configured": true, 00:10:04.247 "data_offset": 0, 00:10:04.247 "data_size": 65536 00:10:04.247 }, 00:10:04.247 { 00:10:04.247 "name": null, 00:10:04.247 "uuid": "69b989c4-b4eb-4374-b47f-cfffe647f4d5", 00:10:04.247 "is_configured": false, 00:10:04.247 "data_offset": 0, 00:10:04.247 "data_size": 65536 00:10:04.247 }, 00:10:04.247 { 00:10:04.247 "name": "BaseBdev3", 00:10:04.247 "uuid": "33260954-20d0-4080-9dcd-87de993421e8", 00:10:04.247 "is_configured": true, 00:10:04.247 "data_offset": 0, 00:10:04.247 "data_size": 65536 00:10:04.247 } 00:10:04.247 ] 00:10:04.247 }' 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.247 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.507 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:04.507 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.507 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.507 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.507 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.766 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:04.766 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:04.766 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:04.766 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.766 [2024-11-27 09:47:05.660164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:04.767 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.767 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:04.767 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.767 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.767 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.767 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.767 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.767 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.767 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.767 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.767 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.767 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.767 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.767 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.767 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.767 09:47:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.767 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.767 "name": "Existed_Raid", 00:10:04.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.767 "strip_size_kb": 64, 00:10:04.767 "state": "configuring", 00:10:04.767 "raid_level": "raid0", 00:10:04.767 "superblock": false, 00:10:04.767 "num_base_bdevs": 3, 00:10:04.767 "num_base_bdevs_discovered": 1, 00:10:04.767 "num_base_bdevs_operational": 3, 00:10:04.767 "base_bdevs_list": [ 00:10:04.767 { 00:10:04.767 "name": null, 00:10:04.767 "uuid": "1463511b-3a57-4549-8646-d53839bd6e7b", 00:10:04.767 "is_configured": false, 00:10:04.767 "data_offset": 0, 00:10:04.767 "data_size": 65536 00:10:04.767 }, 00:10:04.767 { 00:10:04.767 "name": null, 00:10:04.767 "uuid": "69b989c4-b4eb-4374-b47f-cfffe647f4d5", 00:10:04.767 "is_configured": false, 00:10:04.767 "data_offset": 0, 00:10:04.767 "data_size": 65536 00:10:04.767 }, 00:10:04.767 { 00:10:04.767 "name": "BaseBdev3", 00:10:04.767 "uuid": "33260954-20d0-4080-9dcd-87de993421e8", 00:10:04.767 "is_configured": true, 00:10:04.767 "data_offset": 0, 00:10:04.767 "data_size": 65536 00:10:04.767 } 00:10:04.767 ] 00:10:04.767 }' 00:10:04.767 09:47:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.767 09:47:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.338 [2024-11-27 09:47:06.240750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.338 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.338 "name": "Existed_Raid", 00:10:05.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.338 "strip_size_kb": 64, 00:10:05.338 "state": "configuring", 00:10:05.338 "raid_level": "raid0", 00:10:05.338 "superblock": false, 00:10:05.338 "num_base_bdevs": 3, 00:10:05.338 "num_base_bdevs_discovered": 2, 00:10:05.338 "num_base_bdevs_operational": 3, 00:10:05.338 "base_bdevs_list": [ 00:10:05.338 { 00:10:05.338 "name": null, 00:10:05.338 "uuid": "1463511b-3a57-4549-8646-d53839bd6e7b", 00:10:05.338 "is_configured": false, 00:10:05.338 "data_offset": 0, 00:10:05.338 "data_size": 65536 00:10:05.338 }, 00:10:05.338 { 00:10:05.338 "name": "BaseBdev2", 00:10:05.338 "uuid": "69b989c4-b4eb-4374-b47f-cfffe647f4d5", 00:10:05.338 "is_configured": true, 00:10:05.338 "data_offset": 0, 00:10:05.338 "data_size": 65536 00:10:05.338 }, 00:10:05.338 { 00:10:05.338 "name": "BaseBdev3", 00:10:05.338 "uuid": "33260954-20d0-4080-9dcd-87de993421e8", 00:10:05.338 "is_configured": true, 00:10:05.338 "data_offset": 0, 00:10:05.339 "data_size": 65536 00:10:05.339 } 00:10:05.339 ] 00:10:05.339 }' 00:10:05.339 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.339 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.598 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.598 09:47:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.598 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.599 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:05.599 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1463511b-3a57-4549-8646-d53839bd6e7b 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.860 [2024-11-27 09:47:06.856141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:05.860 [2024-11-27 09:47:06.856329] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:05.860 [2024-11-27 09:47:06.856359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:05.860 [2024-11-27 09:47:06.856746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:10:05.860 [2024-11-27 09:47:06.856990] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:05.860 [2024-11-27 09:47:06.857054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:05.860 [2024-11-27 09:47:06.857456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.860 NewBaseBdev 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:05.860 [ 00:10:05.860 { 00:10:05.860 "name": "NewBaseBdev", 00:10:05.860 "aliases": [ 00:10:05.860 "1463511b-3a57-4549-8646-d53839bd6e7b" 00:10:05.860 ], 00:10:05.860 "product_name": "Malloc disk", 00:10:05.860 "block_size": 512, 00:10:05.860 "num_blocks": 65536, 00:10:05.860 "uuid": "1463511b-3a57-4549-8646-d53839bd6e7b", 00:10:05.860 "assigned_rate_limits": { 00:10:05.860 "rw_ios_per_sec": 0, 00:10:05.860 "rw_mbytes_per_sec": 0, 00:10:05.860 "r_mbytes_per_sec": 0, 00:10:05.860 "w_mbytes_per_sec": 0 00:10:05.860 }, 00:10:05.860 "claimed": true, 00:10:05.860 "claim_type": "exclusive_write", 00:10:05.860 "zoned": false, 00:10:05.860 "supported_io_types": { 00:10:05.860 "read": true, 00:10:05.860 "write": true, 00:10:05.860 "unmap": true, 00:10:05.860 "flush": true, 00:10:05.860 "reset": true, 00:10:05.860 "nvme_admin": false, 00:10:05.860 "nvme_io": false, 00:10:05.860 "nvme_io_md": false, 00:10:05.860 "write_zeroes": true, 00:10:05.860 "zcopy": true, 00:10:05.860 "get_zone_info": false, 00:10:05.860 "zone_management": false, 00:10:05.860 "zone_append": false, 00:10:05.860 "compare": false, 00:10:05.860 "compare_and_write": false, 00:10:05.860 "abort": true, 00:10:05.860 "seek_hole": false, 00:10:05.860 "seek_data": false, 00:10:05.860 "copy": true, 00:10:05.860 "nvme_iov_md": false 00:10:05.860 }, 00:10:05.860 "memory_domains": [ 00:10:05.860 { 00:10:05.860 "dma_device_id": "system", 00:10:05.860 "dma_device_type": 1 00:10:05.860 }, 00:10:05.860 { 00:10:05.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.860 "dma_device_type": 2 00:10:05.860 } 00:10:05.860 ], 00:10:05.860 "driver_specific": {} 00:10:05.860 } 00:10:05.860 ] 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.860 "name": "Existed_Raid", 00:10:05.860 "uuid": "4f5f8d4e-b95d-4a2c-9003-5fc5f55f94f3", 00:10:05.860 "strip_size_kb": 64, 00:10:05.860 "state": "online", 00:10:05.860 "raid_level": "raid0", 00:10:05.860 "superblock": false, 00:10:05.860 "num_base_bdevs": 3, 00:10:05.860 
"num_base_bdevs_discovered": 3, 00:10:05.860 "num_base_bdevs_operational": 3, 00:10:05.860 "base_bdevs_list": [ 00:10:05.860 { 00:10:05.860 "name": "NewBaseBdev", 00:10:05.860 "uuid": "1463511b-3a57-4549-8646-d53839bd6e7b", 00:10:05.860 "is_configured": true, 00:10:05.860 "data_offset": 0, 00:10:05.860 "data_size": 65536 00:10:05.860 }, 00:10:05.860 { 00:10:05.860 "name": "BaseBdev2", 00:10:05.860 "uuid": "69b989c4-b4eb-4374-b47f-cfffe647f4d5", 00:10:05.860 "is_configured": true, 00:10:05.860 "data_offset": 0, 00:10:05.860 "data_size": 65536 00:10:05.860 }, 00:10:05.860 { 00:10:05.860 "name": "BaseBdev3", 00:10:05.860 "uuid": "33260954-20d0-4080-9dcd-87de993421e8", 00:10:05.860 "is_configured": true, 00:10:05.860 "data_offset": 0, 00:10:05.860 "data_size": 65536 00:10:05.860 } 00:10:05.860 ] 00:10:05.860 }' 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.860 09:47:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.432 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:06.432 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:06.432 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:06.432 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:06.432 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:06.432 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:06.432 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:06.432 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:06.432 09:47:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.432 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.432 [2024-11-27 09:47:07.331749] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.432 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.432 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.432 "name": "Existed_Raid", 00:10:06.432 "aliases": [ 00:10:06.432 "4f5f8d4e-b95d-4a2c-9003-5fc5f55f94f3" 00:10:06.432 ], 00:10:06.432 "product_name": "Raid Volume", 00:10:06.432 "block_size": 512, 00:10:06.432 "num_blocks": 196608, 00:10:06.432 "uuid": "4f5f8d4e-b95d-4a2c-9003-5fc5f55f94f3", 00:10:06.432 "assigned_rate_limits": { 00:10:06.432 "rw_ios_per_sec": 0, 00:10:06.432 "rw_mbytes_per_sec": 0, 00:10:06.432 "r_mbytes_per_sec": 0, 00:10:06.432 "w_mbytes_per_sec": 0 00:10:06.432 }, 00:10:06.432 "claimed": false, 00:10:06.432 "zoned": false, 00:10:06.432 "supported_io_types": { 00:10:06.432 "read": true, 00:10:06.432 "write": true, 00:10:06.432 "unmap": true, 00:10:06.432 "flush": true, 00:10:06.432 "reset": true, 00:10:06.432 "nvme_admin": false, 00:10:06.432 "nvme_io": false, 00:10:06.432 "nvme_io_md": false, 00:10:06.432 "write_zeroes": true, 00:10:06.432 "zcopy": false, 00:10:06.432 "get_zone_info": false, 00:10:06.432 "zone_management": false, 00:10:06.432 "zone_append": false, 00:10:06.432 "compare": false, 00:10:06.432 "compare_and_write": false, 00:10:06.432 "abort": false, 00:10:06.432 "seek_hole": false, 00:10:06.432 "seek_data": false, 00:10:06.432 "copy": false, 00:10:06.432 "nvme_iov_md": false 00:10:06.432 }, 00:10:06.432 "memory_domains": [ 00:10:06.432 { 00:10:06.432 "dma_device_id": "system", 00:10:06.432 "dma_device_type": 1 00:10:06.432 }, 00:10:06.432 { 00:10:06.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.432 "dma_device_type": 2 00:10:06.432 }, 
00:10:06.432 { 00:10:06.432 "dma_device_id": "system", 00:10:06.432 "dma_device_type": 1 00:10:06.432 }, 00:10:06.432 { 00:10:06.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.432 "dma_device_type": 2 00:10:06.432 }, 00:10:06.432 { 00:10:06.432 "dma_device_id": "system", 00:10:06.432 "dma_device_type": 1 00:10:06.432 }, 00:10:06.432 { 00:10:06.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.432 "dma_device_type": 2 00:10:06.432 } 00:10:06.432 ], 00:10:06.432 "driver_specific": { 00:10:06.432 "raid": { 00:10:06.432 "uuid": "4f5f8d4e-b95d-4a2c-9003-5fc5f55f94f3", 00:10:06.432 "strip_size_kb": 64, 00:10:06.432 "state": "online", 00:10:06.432 "raid_level": "raid0", 00:10:06.432 "superblock": false, 00:10:06.432 "num_base_bdevs": 3, 00:10:06.432 "num_base_bdevs_discovered": 3, 00:10:06.432 "num_base_bdevs_operational": 3, 00:10:06.432 "base_bdevs_list": [ 00:10:06.432 { 00:10:06.432 "name": "NewBaseBdev", 00:10:06.432 "uuid": "1463511b-3a57-4549-8646-d53839bd6e7b", 00:10:06.432 "is_configured": true, 00:10:06.432 "data_offset": 0, 00:10:06.432 "data_size": 65536 00:10:06.432 }, 00:10:06.432 { 00:10:06.432 "name": "BaseBdev2", 00:10:06.432 "uuid": "69b989c4-b4eb-4374-b47f-cfffe647f4d5", 00:10:06.432 "is_configured": true, 00:10:06.432 "data_offset": 0, 00:10:06.432 "data_size": 65536 00:10:06.432 }, 00:10:06.432 { 00:10:06.432 "name": "BaseBdev3", 00:10:06.433 "uuid": "33260954-20d0-4080-9dcd-87de993421e8", 00:10:06.433 "is_configured": true, 00:10:06.433 "data_offset": 0, 00:10:06.433 "data_size": 65536 00:10:06.433 } 00:10:06.433 ] 00:10:06.433 } 00:10:06.433 } 00:10:06.433 }' 00:10:06.433 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:06.433 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:06.433 BaseBdev2 00:10:06.433 BaseBdev3' 00:10:06.433 09:47:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.433 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:06.433 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.433 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:06.433 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.433 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.433 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.433 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.433 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.433 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.433 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.433 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:06.433 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.433 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.433 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.433 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.701 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:06.701 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.701 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.701 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.701 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:06.701 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.701 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.701 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.701 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.701 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.701 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.701 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.701 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.701 [2024-11-27 09:47:07.627010] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.701 [2024-11-27 09:47:07.627147] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.701 [2024-11-27 09:47:07.627281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.701 [2024-11-27 09:47:07.627371] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.702 [2024-11-27 09:47:07.627398] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:06.702 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.702 09:47:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64084 00:10:06.702 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64084 ']' 00:10:06.702 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64084 00:10:06.702 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:06.702 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.702 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64084 00:10:06.702 killing process with pid 64084 00:10:06.702 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.702 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.702 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64084' 00:10:06.702 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64084 00:10:06.702 [2024-11-27 09:47:07.677021] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:06.702 09:47:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64084 00:10:06.961 [2024-11-27 09:47:08.015393] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:08.343 00:10:08.343 real 0m10.873s 00:10:08.343 user 0m16.902s 00:10:08.343 sys 0m2.055s 00:10:08.343 ************************************ 00:10:08.343 END TEST 
raid_state_function_test 00:10:08.343 ************************************ 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.343 09:47:09 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:10:08.343 09:47:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:08.343 09:47:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.343 09:47:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:08.343 ************************************ 00:10:08.343 START TEST raid_state_function_test_sb 00:10:08.343 ************************************ 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:08.343 09:47:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:08.343 Process raid pid: 64707 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=64707 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64707' 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64707 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64707 ']' 00:10:08.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.343 09:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.343 [2024-11-27 09:47:09.457950] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:10:08.343 [2024-11-27 09:47:09.458095] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.603 [2024-11-27 09:47:09.637860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.862 [2024-11-27 09:47:09.780362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.122 [2024-11-27 09:47:10.022622] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.122 [2024-11-27 09:47:10.022663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.382 [2024-11-27 09:47:10.308674] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.382 [2024-11-27 09:47:10.308805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.382 [2024-11-27 09:47:10.308851] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.382 [2024-11-27 09:47:10.308878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.382 [2024-11-27 09:47:10.308898] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:10:09.382 [2024-11-27 09:47:10.308920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.382 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.382 "name": "Existed_Raid", 00:10:09.382 "uuid": "a19ff40d-22f1-4f81-b199-7bbfb5cd8a66", 00:10:09.382 "strip_size_kb": 64, 00:10:09.382 "state": "configuring", 00:10:09.382 "raid_level": "raid0", 00:10:09.382 "superblock": true, 00:10:09.382 "num_base_bdevs": 3, 00:10:09.382 "num_base_bdevs_discovered": 0, 00:10:09.382 "num_base_bdevs_operational": 3, 00:10:09.382 "base_bdevs_list": [ 00:10:09.382 { 00:10:09.382 "name": "BaseBdev1", 00:10:09.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.382 "is_configured": false, 00:10:09.382 "data_offset": 0, 00:10:09.382 "data_size": 0 00:10:09.382 }, 00:10:09.383 { 00:10:09.383 "name": "BaseBdev2", 00:10:09.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.383 "is_configured": false, 00:10:09.383 "data_offset": 0, 00:10:09.383 "data_size": 0 00:10:09.383 }, 00:10:09.383 { 00:10:09.383 "name": "BaseBdev3", 00:10:09.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.383 "is_configured": false, 00:10:09.383 "data_offset": 0, 00:10:09.383 "data_size": 0 00:10:09.383 } 00:10:09.383 ] 00:10:09.383 }' 00:10:09.383 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.383 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.642 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:09.642 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.642 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.642 [2024-11-27 09:47:10.739843] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.642 [2024-11-27 09:47:10.739933] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:09.642 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.642 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:09.642 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.642 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.642 [2024-11-27 09:47:10.751808] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.642 [2024-11-27 09:47:10.751889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.642 [2024-11-27 09:47:10.751933] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.642 [2024-11-27 09:47:10.751956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.642 [2024-11-27 09:47:10.751974] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:09.642 [2024-11-27 09:47:10.751995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:09.642 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.642 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:09.642 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.642 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.902 [2024-11-27 09:47:10.804873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.902 BaseBdev1 
00:10:09.902 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.902 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:09.902 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:09.902 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.902 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.902 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.903 [ 00:10:09.903 { 00:10:09.903 "name": "BaseBdev1", 00:10:09.903 "aliases": [ 00:10:09.903 "2436095d-bb03-46de-9687-9cfb06abbcc7" 00:10:09.903 ], 00:10:09.903 "product_name": "Malloc disk", 00:10:09.903 "block_size": 512, 00:10:09.903 "num_blocks": 65536, 00:10:09.903 "uuid": "2436095d-bb03-46de-9687-9cfb06abbcc7", 00:10:09.903 "assigned_rate_limits": { 00:10:09.903 
"rw_ios_per_sec": 0, 00:10:09.903 "rw_mbytes_per_sec": 0, 00:10:09.903 "r_mbytes_per_sec": 0, 00:10:09.903 "w_mbytes_per_sec": 0 00:10:09.903 }, 00:10:09.903 "claimed": true, 00:10:09.903 "claim_type": "exclusive_write", 00:10:09.903 "zoned": false, 00:10:09.903 "supported_io_types": { 00:10:09.903 "read": true, 00:10:09.903 "write": true, 00:10:09.903 "unmap": true, 00:10:09.903 "flush": true, 00:10:09.903 "reset": true, 00:10:09.903 "nvme_admin": false, 00:10:09.903 "nvme_io": false, 00:10:09.903 "nvme_io_md": false, 00:10:09.903 "write_zeroes": true, 00:10:09.903 "zcopy": true, 00:10:09.903 "get_zone_info": false, 00:10:09.903 "zone_management": false, 00:10:09.903 "zone_append": false, 00:10:09.903 "compare": false, 00:10:09.903 "compare_and_write": false, 00:10:09.903 "abort": true, 00:10:09.903 "seek_hole": false, 00:10:09.903 "seek_data": false, 00:10:09.903 "copy": true, 00:10:09.903 "nvme_iov_md": false 00:10:09.903 }, 00:10:09.903 "memory_domains": [ 00:10:09.903 { 00:10:09.903 "dma_device_id": "system", 00:10:09.903 "dma_device_type": 1 00:10:09.903 }, 00:10:09.903 { 00:10:09.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.903 "dma_device_type": 2 00:10:09.903 } 00:10:09.903 ], 00:10:09.903 "driver_specific": {} 00:10:09.903 } 00:10:09.903 ] 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.903 "name": "Existed_Raid", 00:10:09.903 "uuid": "66fd6bd9-1750-4255-b86e-1df4d99fe591", 00:10:09.903 "strip_size_kb": 64, 00:10:09.903 "state": "configuring", 00:10:09.903 "raid_level": "raid0", 00:10:09.903 "superblock": true, 00:10:09.903 "num_base_bdevs": 3, 00:10:09.903 "num_base_bdevs_discovered": 1, 00:10:09.903 "num_base_bdevs_operational": 3, 00:10:09.903 "base_bdevs_list": [ 00:10:09.903 { 00:10:09.903 "name": "BaseBdev1", 00:10:09.903 "uuid": "2436095d-bb03-46de-9687-9cfb06abbcc7", 00:10:09.903 "is_configured": true, 00:10:09.903 "data_offset": 2048, 00:10:09.903 "data_size": 63488 
00:10:09.903 }, 00:10:09.903 { 00:10:09.903 "name": "BaseBdev2", 00:10:09.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.903 "is_configured": false, 00:10:09.903 "data_offset": 0, 00:10:09.903 "data_size": 0 00:10:09.903 }, 00:10:09.903 { 00:10:09.903 "name": "BaseBdev3", 00:10:09.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.903 "is_configured": false, 00:10:09.903 "data_offset": 0, 00:10:09.903 "data_size": 0 00:10:09.903 } 00:10:09.903 ] 00:10:09.903 }' 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.903 09:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.472 [2024-11-27 09:47:11.308130] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:10.472 [2024-11-27 09:47:11.308244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.472 [2024-11-27 09:47:11.320159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:10.472 [2024-11-27 
09:47:11.322396] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.472 [2024-11-27 09:47:11.322497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.472 [2024-11-27 09:47:11.322529] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:10.472 [2024-11-27 09:47:11.322552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.472 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.472 "name": "Existed_Raid", 00:10:10.472 "uuid": "eea6ace6-696e-445b-92bb-15f0f40bdaff", 00:10:10.472 "strip_size_kb": 64, 00:10:10.472 "state": "configuring", 00:10:10.472 "raid_level": "raid0", 00:10:10.472 "superblock": true, 00:10:10.472 "num_base_bdevs": 3, 00:10:10.472 "num_base_bdevs_discovered": 1, 00:10:10.472 "num_base_bdevs_operational": 3, 00:10:10.472 "base_bdevs_list": [ 00:10:10.472 { 00:10:10.472 "name": "BaseBdev1", 00:10:10.472 "uuid": "2436095d-bb03-46de-9687-9cfb06abbcc7", 00:10:10.472 "is_configured": true, 00:10:10.472 "data_offset": 2048, 00:10:10.472 "data_size": 63488 00:10:10.472 }, 00:10:10.472 { 00:10:10.472 "name": "BaseBdev2", 00:10:10.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.472 "is_configured": false, 00:10:10.472 "data_offset": 0, 00:10:10.472 "data_size": 0 00:10:10.472 }, 00:10:10.473 { 00:10:10.473 "name": "BaseBdev3", 00:10:10.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.473 "is_configured": false, 00:10:10.473 "data_offset": 0, 00:10:10.473 "data_size": 0 00:10:10.473 } 00:10:10.473 ] 00:10:10.473 }' 00:10:10.473 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.473 09:47:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.733 [2024-11-27 09:47:11.785551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.733 BaseBdev2 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.733 [ 00:10:10.733 { 00:10:10.733 "name": "BaseBdev2", 00:10:10.733 "aliases": [ 00:10:10.733 "35e8f38e-dae3-444c-a34d-775e67220324" 00:10:10.733 ], 00:10:10.733 "product_name": "Malloc disk", 00:10:10.733 "block_size": 512, 00:10:10.733 "num_blocks": 65536, 00:10:10.733 "uuid": "35e8f38e-dae3-444c-a34d-775e67220324", 00:10:10.733 "assigned_rate_limits": { 00:10:10.733 "rw_ios_per_sec": 0, 00:10:10.733 "rw_mbytes_per_sec": 0, 00:10:10.733 "r_mbytes_per_sec": 0, 00:10:10.733 "w_mbytes_per_sec": 0 00:10:10.733 }, 00:10:10.733 "claimed": true, 00:10:10.733 "claim_type": "exclusive_write", 00:10:10.733 "zoned": false, 00:10:10.733 "supported_io_types": { 00:10:10.733 "read": true, 00:10:10.733 "write": true, 00:10:10.733 "unmap": true, 00:10:10.733 "flush": true, 00:10:10.733 "reset": true, 00:10:10.733 "nvme_admin": false, 00:10:10.733 "nvme_io": false, 00:10:10.733 "nvme_io_md": false, 00:10:10.733 "write_zeroes": true, 00:10:10.733 "zcopy": true, 00:10:10.733 "get_zone_info": false, 00:10:10.733 "zone_management": false, 00:10:10.733 "zone_append": false, 00:10:10.733 "compare": false, 00:10:10.733 "compare_and_write": false, 00:10:10.733 "abort": true, 00:10:10.733 "seek_hole": false, 00:10:10.733 "seek_data": false, 00:10:10.733 "copy": true, 00:10:10.733 "nvme_iov_md": false 00:10:10.733 }, 00:10:10.733 "memory_domains": [ 00:10:10.733 { 00:10:10.733 "dma_device_id": "system", 00:10:10.733 "dma_device_type": 1 00:10:10.733 }, 00:10:10.733 { 00:10:10.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.733 "dma_device_type": 2 00:10:10.733 } 00:10:10.733 ], 00:10:10.733 "driver_specific": {} 00:10:10.733 } 00:10:10.733 ] 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.733 09:47:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.991 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.991 "name": "Existed_Raid", 00:10:10.991 "uuid": "eea6ace6-696e-445b-92bb-15f0f40bdaff", 00:10:10.991 "strip_size_kb": 64, 00:10:10.991 "state": "configuring", 00:10:10.991 "raid_level": "raid0", 00:10:10.991 "superblock": true, 00:10:10.991 "num_base_bdevs": 3, 00:10:10.991 "num_base_bdevs_discovered": 2, 00:10:10.991 "num_base_bdevs_operational": 3, 00:10:10.991 "base_bdevs_list": [ 00:10:10.991 { 00:10:10.991 "name": "BaseBdev1", 00:10:10.991 "uuid": "2436095d-bb03-46de-9687-9cfb06abbcc7", 00:10:10.991 "is_configured": true, 00:10:10.991 "data_offset": 2048, 00:10:10.991 "data_size": 63488 00:10:10.991 }, 00:10:10.991 { 00:10:10.991 "name": "BaseBdev2", 00:10:10.991 "uuid": "35e8f38e-dae3-444c-a34d-775e67220324", 00:10:10.991 "is_configured": true, 00:10:10.991 "data_offset": 2048, 00:10:10.991 "data_size": 63488 00:10:10.991 }, 00:10:10.991 { 00:10:10.991 "name": "BaseBdev3", 00:10:10.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.991 "is_configured": false, 00:10:10.991 "data_offset": 0, 00:10:10.991 "data_size": 0 00:10:10.991 } 00:10:10.991 ] 00:10:10.991 }' 00:10:10.991 09:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.991 09:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.250 [2024-11-27 09:47:12.266161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:11.250 [2024-11-27 09:47:12.266627] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:11.250 [2024-11-27 09:47:12.266701] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:11.250 [2024-11-27 09:47:12.267082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:11.250 [2024-11-27 09:47:12.267302] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:11.250 BaseBdev3 00:10:11.250 [2024-11-27 09:47:12.267355] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:11.250 [2024-11-27 09:47:12.267585] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.250 [ 00:10:11.250 { 00:10:11.250 "name": "BaseBdev3", 00:10:11.250 "aliases": [ 00:10:11.250 "6c95dc87-6137-476a-bb29-c7cc22c69cad" 00:10:11.250 ], 00:10:11.250 "product_name": "Malloc disk", 00:10:11.250 "block_size": 512, 00:10:11.250 "num_blocks": 65536, 00:10:11.250 "uuid": "6c95dc87-6137-476a-bb29-c7cc22c69cad", 00:10:11.250 "assigned_rate_limits": { 00:10:11.250 "rw_ios_per_sec": 0, 00:10:11.250 "rw_mbytes_per_sec": 0, 00:10:11.250 "r_mbytes_per_sec": 0, 00:10:11.250 "w_mbytes_per_sec": 0 00:10:11.250 }, 00:10:11.250 "claimed": true, 00:10:11.250 "claim_type": "exclusive_write", 00:10:11.250 "zoned": false, 00:10:11.250 "supported_io_types": { 00:10:11.250 "read": true, 00:10:11.250 "write": true, 00:10:11.250 "unmap": true, 00:10:11.250 "flush": true, 00:10:11.250 "reset": true, 00:10:11.250 "nvme_admin": false, 00:10:11.250 "nvme_io": false, 00:10:11.250 "nvme_io_md": false, 00:10:11.250 "write_zeroes": true, 00:10:11.250 "zcopy": true, 00:10:11.250 "get_zone_info": false, 00:10:11.250 "zone_management": false, 00:10:11.250 "zone_append": false, 00:10:11.250 "compare": false, 00:10:11.250 "compare_and_write": false, 00:10:11.250 "abort": true, 00:10:11.250 "seek_hole": false, 00:10:11.250 "seek_data": false, 00:10:11.250 "copy": true, 00:10:11.250 "nvme_iov_md": false 00:10:11.250 }, 00:10:11.250 "memory_domains": [ 00:10:11.250 { 00:10:11.250 "dma_device_id": "system", 00:10:11.250 "dma_device_type": 1 00:10:11.250 }, 00:10:11.250 { 00:10:11.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.250 "dma_device_type": 2 00:10:11.250 } 00:10:11.250 ], 00:10:11.250 "driver_specific": 
{} 00:10:11.250 } 00:10:11.250 ] 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.250 "name": "Existed_Raid", 00:10:11.250 "uuid": "eea6ace6-696e-445b-92bb-15f0f40bdaff", 00:10:11.250 "strip_size_kb": 64, 00:10:11.250 "state": "online", 00:10:11.250 "raid_level": "raid0", 00:10:11.250 "superblock": true, 00:10:11.250 "num_base_bdevs": 3, 00:10:11.250 "num_base_bdevs_discovered": 3, 00:10:11.250 "num_base_bdevs_operational": 3, 00:10:11.250 "base_bdevs_list": [ 00:10:11.250 { 00:10:11.250 "name": "BaseBdev1", 00:10:11.250 "uuid": "2436095d-bb03-46de-9687-9cfb06abbcc7", 00:10:11.250 "is_configured": true, 00:10:11.250 "data_offset": 2048, 00:10:11.250 "data_size": 63488 00:10:11.250 }, 00:10:11.250 { 00:10:11.250 "name": "BaseBdev2", 00:10:11.250 "uuid": "35e8f38e-dae3-444c-a34d-775e67220324", 00:10:11.250 "is_configured": true, 00:10:11.250 "data_offset": 2048, 00:10:11.250 "data_size": 63488 00:10:11.250 }, 00:10:11.250 { 00:10:11.250 "name": "BaseBdev3", 00:10:11.250 "uuid": "6c95dc87-6137-476a-bb29-c7cc22c69cad", 00:10:11.250 "is_configured": true, 00:10:11.250 "data_offset": 2048, 00:10:11.250 "data_size": 63488 00:10:11.250 } 00:10:11.250 ] 00:10:11.250 }' 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.250 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.820 [2024-11-27 09:47:12.729784] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.820 "name": "Existed_Raid", 00:10:11.820 "aliases": [ 00:10:11.820 "eea6ace6-696e-445b-92bb-15f0f40bdaff" 00:10:11.820 ], 00:10:11.820 "product_name": "Raid Volume", 00:10:11.820 "block_size": 512, 00:10:11.820 "num_blocks": 190464, 00:10:11.820 "uuid": "eea6ace6-696e-445b-92bb-15f0f40bdaff", 00:10:11.820 "assigned_rate_limits": { 00:10:11.820 "rw_ios_per_sec": 0, 00:10:11.820 "rw_mbytes_per_sec": 0, 00:10:11.820 "r_mbytes_per_sec": 0, 00:10:11.820 "w_mbytes_per_sec": 0 00:10:11.820 }, 00:10:11.820 "claimed": false, 00:10:11.820 "zoned": false, 00:10:11.820 "supported_io_types": { 00:10:11.820 "read": true, 00:10:11.820 "write": true, 00:10:11.820 "unmap": true, 00:10:11.820 "flush": true, 00:10:11.820 "reset": true, 00:10:11.820 "nvme_admin": false, 00:10:11.820 "nvme_io": false, 00:10:11.820 "nvme_io_md": false, 00:10:11.820 
"write_zeroes": true, 00:10:11.820 "zcopy": false, 00:10:11.820 "get_zone_info": false, 00:10:11.820 "zone_management": false, 00:10:11.820 "zone_append": false, 00:10:11.820 "compare": false, 00:10:11.820 "compare_and_write": false, 00:10:11.820 "abort": false, 00:10:11.820 "seek_hole": false, 00:10:11.820 "seek_data": false, 00:10:11.820 "copy": false, 00:10:11.820 "nvme_iov_md": false 00:10:11.820 }, 00:10:11.820 "memory_domains": [ 00:10:11.820 { 00:10:11.820 "dma_device_id": "system", 00:10:11.820 "dma_device_type": 1 00:10:11.820 }, 00:10:11.820 { 00:10:11.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.820 "dma_device_type": 2 00:10:11.820 }, 00:10:11.820 { 00:10:11.820 "dma_device_id": "system", 00:10:11.820 "dma_device_type": 1 00:10:11.820 }, 00:10:11.820 { 00:10:11.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.820 "dma_device_type": 2 00:10:11.820 }, 00:10:11.820 { 00:10:11.820 "dma_device_id": "system", 00:10:11.820 "dma_device_type": 1 00:10:11.820 }, 00:10:11.820 { 00:10:11.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.820 "dma_device_type": 2 00:10:11.820 } 00:10:11.820 ], 00:10:11.820 "driver_specific": { 00:10:11.820 "raid": { 00:10:11.820 "uuid": "eea6ace6-696e-445b-92bb-15f0f40bdaff", 00:10:11.820 "strip_size_kb": 64, 00:10:11.820 "state": "online", 00:10:11.820 "raid_level": "raid0", 00:10:11.820 "superblock": true, 00:10:11.820 "num_base_bdevs": 3, 00:10:11.820 "num_base_bdevs_discovered": 3, 00:10:11.820 "num_base_bdevs_operational": 3, 00:10:11.820 "base_bdevs_list": [ 00:10:11.820 { 00:10:11.820 "name": "BaseBdev1", 00:10:11.820 "uuid": "2436095d-bb03-46de-9687-9cfb06abbcc7", 00:10:11.820 "is_configured": true, 00:10:11.820 "data_offset": 2048, 00:10:11.820 "data_size": 63488 00:10:11.820 }, 00:10:11.820 { 00:10:11.820 "name": "BaseBdev2", 00:10:11.820 "uuid": "35e8f38e-dae3-444c-a34d-775e67220324", 00:10:11.820 "is_configured": true, 00:10:11.820 "data_offset": 2048, 00:10:11.820 "data_size": 63488 00:10:11.820 }, 
00:10:11.820 { 00:10:11.820 "name": "BaseBdev3", 00:10:11.820 "uuid": "6c95dc87-6137-476a-bb29-c7cc22c69cad", 00:10:11.820 "is_configured": true, 00:10:11.820 "data_offset": 2048, 00:10:11.820 "data_size": 63488 00:10:11.820 } 00:10:11.820 ] 00:10:11.820 } 00:10:11.820 } 00:10:11.820 }' 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:11.820 BaseBdev2 00:10:11.820 BaseBdev3' 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.820 
09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.820 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.080 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.080 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.080 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.080 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.080 09:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:12.080 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.080 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.080 09:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.080 [2024-11-27 09:47:13.013080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:12.080 [2024-11-27 09:47:13.013187] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:12.080 [2024-11-27 09:47:13.013280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.080 "name": "Existed_Raid", 00:10:12.080 "uuid": "eea6ace6-696e-445b-92bb-15f0f40bdaff", 00:10:12.080 "strip_size_kb": 64, 00:10:12.080 "state": "offline", 00:10:12.080 "raid_level": "raid0", 00:10:12.080 "superblock": true, 00:10:12.080 "num_base_bdevs": 3, 00:10:12.080 "num_base_bdevs_discovered": 2, 00:10:12.080 "num_base_bdevs_operational": 2, 00:10:12.080 "base_bdevs_list": [ 00:10:12.080 { 00:10:12.080 "name": null, 00:10:12.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.080 "is_configured": false, 00:10:12.080 "data_offset": 0, 00:10:12.080 "data_size": 63488 00:10:12.080 }, 00:10:12.080 { 00:10:12.080 "name": "BaseBdev2", 00:10:12.080 "uuid": "35e8f38e-dae3-444c-a34d-775e67220324", 00:10:12.080 "is_configured": true, 00:10:12.080 "data_offset": 2048, 00:10:12.080 "data_size": 63488 00:10:12.080 }, 00:10:12.080 { 00:10:12.080 "name": "BaseBdev3", 00:10:12.080 "uuid": "6c95dc87-6137-476a-bb29-c7cc22c69cad", 
00:10:12.080 "is_configured": true, 00:10:12.080 "data_offset": 2048, 00:10:12.080 "data_size": 63488 00:10:12.080 } 00:10:12.080 ] 00:10:12.080 }' 00:10:12.080 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.081 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.649 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:12.649 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.650 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.650 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.650 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.650 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:12.650 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.650 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:12.650 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:12.650 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:12.650 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.650 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.650 [2024-11-27 09:47:13.618413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:12.650 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.650 09:47:13 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:12.650 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.650 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.650 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:12.650 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.650 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.650 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.910 [2024-11-27 09:47:13.785970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:12.910 [2024-11-27 09:47:13.786139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.910 BaseBdev2 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:12.910 09:47:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.910 09:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.910 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.910 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:12.910 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.910 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.910 [ 00:10:12.910 { 00:10:12.910 "name": "BaseBdev2", 00:10:12.910 "aliases": [ 00:10:12.910 "00592072-be59-48f3-aac2-799d50497083" 00:10:12.910 ], 00:10:12.910 "product_name": "Malloc disk", 00:10:12.910 "block_size": 512, 00:10:12.910 "num_blocks": 65536, 00:10:12.910 "uuid": "00592072-be59-48f3-aac2-799d50497083", 00:10:12.910 "assigned_rate_limits": { 00:10:12.910 "rw_ios_per_sec": 0, 00:10:12.910 "rw_mbytes_per_sec": 0, 00:10:12.910 "r_mbytes_per_sec": 0, 00:10:12.910 "w_mbytes_per_sec": 0 00:10:12.910 }, 00:10:12.910 "claimed": false, 00:10:12.910 "zoned": false, 00:10:12.910 "supported_io_types": { 00:10:12.910 "read": true, 00:10:12.910 "write": true, 00:10:12.910 "unmap": true, 00:10:12.910 "flush": true, 00:10:12.910 "reset": true, 00:10:12.910 "nvme_admin": false, 00:10:12.910 "nvme_io": false, 00:10:12.910 "nvme_io_md": false, 00:10:12.910 "write_zeroes": true, 00:10:12.910 "zcopy": true, 00:10:12.910 "get_zone_info": false, 00:10:12.910 
"zone_management": false, 00:10:12.910 "zone_append": false, 00:10:12.910 "compare": false, 00:10:12.910 "compare_and_write": false, 00:10:12.910 "abort": true, 00:10:12.910 "seek_hole": false, 00:10:12.910 "seek_data": false, 00:10:12.910 "copy": true, 00:10:12.910 "nvme_iov_md": false 00:10:12.910 }, 00:10:12.910 "memory_domains": [ 00:10:12.910 { 00:10:12.910 "dma_device_id": "system", 00:10:12.910 "dma_device_type": 1 00:10:12.910 }, 00:10:12.910 { 00:10:12.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.910 "dma_device_type": 2 00:10:12.910 } 00:10:12.910 ], 00:10:12.910 "driver_specific": {} 00:10:12.910 } 00:10:12.910 ] 00:10:12.910 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.910 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:12.910 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:12.910 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.910 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:12.910 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.910 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.170 BaseBdev3 00:10:13.170 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.170 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.171 [ 00:10:13.171 { 00:10:13.171 "name": "BaseBdev3", 00:10:13.171 "aliases": [ 00:10:13.171 "cf42b017-6b76-4fa7-85ba-b6f7f88e3fa1" 00:10:13.171 ], 00:10:13.171 "product_name": "Malloc disk", 00:10:13.171 "block_size": 512, 00:10:13.171 "num_blocks": 65536, 00:10:13.171 "uuid": "cf42b017-6b76-4fa7-85ba-b6f7f88e3fa1", 00:10:13.171 "assigned_rate_limits": { 00:10:13.171 "rw_ios_per_sec": 0, 00:10:13.171 "rw_mbytes_per_sec": 0, 00:10:13.171 "r_mbytes_per_sec": 0, 00:10:13.171 "w_mbytes_per_sec": 0 00:10:13.171 }, 00:10:13.171 "claimed": false, 00:10:13.171 "zoned": false, 00:10:13.171 "supported_io_types": { 00:10:13.171 "read": true, 00:10:13.171 "write": true, 00:10:13.171 "unmap": true, 00:10:13.171 "flush": true, 00:10:13.171 "reset": true, 00:10:13.171 "nvme_admin": false, 00:10:13.171 "nvme_io": false, 00:10:13.171 "nvme_io_md": false, 00:10:13.171 "write_zeroes": true, 00:10:13.171 
"zcopy": true, 00:10:13.171 "get_zone_info": false, 00:10:13.171 "zone_management": false, 00:10:13.171 "zone_append": false, 00:10:13.171 "compare": false, 00:10:13.171 "compare_and_write": false, 00:10:13.171 "abort": true, 00:10:13.171 "seek_hole": false, 00:10:13.171 "seek_data": false, 00:10:13.171 "copy": true, 00:10:13.171 "nvme_iov_md": false 00:10:13.171 }, 00:10:13.171 "memory_domains": [ 00:10:13.171 { 00:10:13.171 "dma_device_id": "system", 00:10:13.171 "dma_device_type": 1 00:10:13.171 }, 00:10:13.171 { 00:10:13.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.171 "dma_device_type": 2 00:10:13.171 } 00:10:13.171 ], 00:10:13.171 "driver_specific": {} 00:10:13.171 } 00:10:13.171 ] 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.171 [2024-11-27 09:47:14.124823] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.171 [2024-11-27 09:47:14.124918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.171 [2024-11-27 09:47:14.124963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.171 [2024-11-27 09:47:14.127060] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.171 09:47:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.171 "name": "Existed_Raid", 00:10:13.171 "uuid": "a592d877-2619-404b-a3c6-ee94f8b153a7", 00:10:13.171 "strip_size_kb": 64, 00:10:13.171 "state": "configuring", 00:10:13.171 "raid_level": "raid0", 00:10:13.171 "superblock": true, 00:10:13.171 "num_base_bdevs": 3, 00:10:13.171 "num_base_bdevs_discovered": 2, 00:10:13.171 "num_base_bdevs_operational": 3, 00:10:13.171 "base_bdevs_list": [ 00:10:13.171 { 00:10:13.171 "name": "BaseBdev1", 00:10:13.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.171 "is_configured": false, 00:10:13.171 "data_offset": 0, 00:10:13.171 "data_size": 0 00:10:13.171 }, 00:10:13.171 { 00:10:13.171 "name": "BaseBdev2", 00:10:13.171 "uuid": "00592072-be59-48f3-aac2-799d50497083", 00:10:13.171 "is_configured": true, 00:10:13.171 "data_offset": 2048, 00:10:13.171 "data_size": 63488 00:10:13.171 }, 00:10:13.171 { 00:10:13.171 "name": "BaseBdev3", 00:10:13.171 "uuid": "cf42b017-6b76-4fa7-85ba-b6f7f88e3fa1", 00:10:13.171 "is_configured": true, 00:10:13.171 "data_offset": 2048, 00:10:13.171 "data_size": 63488 00:10:13.171 } 00:10:13.171 ] 00:10:13.171 }' 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.171 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.799 [2024-11-27 09:47:14.576148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.799 09:47:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.799 "name": "Existed_Raid", 00:10:13.799 "uuid": "a592d877-2619-404b-a3c6-ee94f8b153a7", 00:10:13.799 "strip_size_kb": 64, 
00:10:13.799 "state": "configuring", 00:10:13.799 "raid_level": "raid0", 00:10:13.799 "superblock": true, 00:10:13.799 "num_base_bdevs": 3, 00:10:13.799 "num_base_bdevs_discovered": 1, 00:10:13.799 "num_base_bdevs_operational": 3, 00:10:13.799 "base_bdevs_list": [ 00:10:13.799 { 00:10:13.799 "name": "BaseBdev1", 00:10:13.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.799 "is_configured": false, 00:10:13.799 "data_offset": 0, 00:10:13.799 "data_size": 0 00:10:13.799 }, 00:10:13.799 { 00:10:13.799 "name": null, 00:10:13.799 "uuid": "00592072-be59-48f3-aac2-799d50497083", 00:10:13.799 "is_configured": false, 00:10:13.799 "data_offset": 0, 00:10:13.799 "data_size": 63488 00:10:13.799 }, 00:10:13.799 { 00:10:13.799 "name": "BaseBdev3", 00:10:13.799 "uuid": "cf42b017-6b76-4fa7-85ba-b6f7f88e3fa1", 00:10:13.799 "is_configured": true, 00:10:13.799 "data_offset": 2048, 00:10:13.799 "data_size": 63488 00:10:13.799 } 00:10:13.799 ] 00:10:13.799 }' 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.799 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.059 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.059 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.059 09:47:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.059 09:47:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.059 [2024-11-27 09:47:15.091246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:14.059 BaseBdev1 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.059 
[ 00:10:14.059 { 00:10:14.059 "name": "BaseBdev1", 00:10:14.059 "aliases": [ 00:10:14.059 "012e1e7a-8d09-4a19-bac5-1cb13f16d118" 00:10:14.059 ], 00:10:14.059 "product_name": "Malloc disk", 00:10:14.059 "block_size": 512, 00:10:14.059 "num_blocks": 65536, 00:10:14.059 "uuid": "012e1e7a-8d09-4a19-bac5-1cb13f16d118", 00:10:14.059 "assigned_rate_limits": { 00:10:14.059 "rw_ios_per_sec": 0, 00:10:14.059 "rw_mbytes_per_sec": 0, 00:10:14.059 "r_mbytes_per_sec": 0, 00:10:14.059 "w_mbytes_per_sec": 0 00:10:14.059 }, 00:10:14.059 "claimed": true, 00:10:14.059 "claim_type": "exclusive_write", 00:10:14.059 "zoned": false, 00:10:14.059 "supported_io_types": { 00:10:14.059 "read": true, 00:10:14.059 "write": true, 00:10:14.059 "unmap": true, 00:10:14.059 "flush": true, 00:10:14.059 "reset": true, 00:10:14.059 "nvme_admin": false, 00:10:14.059 "nvme_io": false, 00:10:14.059 "nvme_io_md": false, 00:10:14.059 "write_zeroes": true, 00:10:14.059 "zcopy": true, 00:10:14.059 "get_zone_info": false, 00:10:14.059 "zone_management": false, 00:10:14.059 "zone_append": false, 00:10:14.059 "compare": false, 00:10:14.059 "compare_and_write": false, 00:10:14.059 "abort": true, 00:10:14.059 "seek_hole": false, 00:10:14.059 "seek_data": false, 00:10:14.059 "copy": true, 00:10:14.059 "nvme_iov_md": false 00:10:14.059 }, 00:10:14.059 "memory_domains": [ 00:10:14.059 { 00:10:14.059 "dma_device_id": "system", 00:10:14.059 "dma_device_type": 1 00:10:14.059 }, 00:10:14.059 { 00:10:14.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.059 "dma_device_type": 2 00:10:14.059 } 00:10:14.059 ], 00:10:14.059 "driver_specific": {} 00:10:14.059 } 00:10:14.059 ] 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.059 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.059 "name": "Existed_Raid", 00:10:14.059 "uuid": "a592d877-2619-404b-a3c6-ee94f8b153a7", 00:10:14.059 "strip_size_kb": 64, 00:10:14.059 "state": "configuring", 00:10:14.060 "raid_level": "raid0", 00:10:14.060 "superblock": true, 
00:10:14.060 "num_base_bdevs": 3, 00:10:14.060 "num_base_bdevs_discovered": 2, 00:10:14.060 "num_base_bdevs_operational": 3, 00:10:14.060 "base_bdevs_list": [ 00:10:14.060 { 00:10:14.060 "name": "BaseBdev1", 00:10:14.060 "uuid": "012e1e7a-8d09-4a19-bac5-1cb13f16d118", 00:10:14.060 "is_configured": true, 00:10:14.060 "data_offset": 2048, 00:10:14.060 "data_size": 63488 00:10:14.060 }, 00:10:14.060 { 00:10:14.060 "name": null, 00:10:14.060 "uuid": "00592072-be59-48f3-aac2-799d50497083", 00:10:14.060 "is_configured": false, 00:10:14.060 "data_offset": 0, 00:10:14.060 "data_size": 63488 00:10:14.060 }, 00:10:14.060 { 00:10:14.060 "name": "BaseBdev3", 00:10:14.060 "uuid": "cf42b017-6b76-4fa7-85ba-b6f7f88e3fa1", 00:10:14.060 "is_configured": true, 00:10:14.060 "data_offset": 2048, 00:10:14.060 "data_size": 63488 00:10:14.060 } 00:10:14.060 ] 00:10:14.060 }' 00:10:14.060 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.060 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.629 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.629 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:14.629 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.629 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.629 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.629 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:14.629 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:14.629 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:10:14.629 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.629 [2024-11-27 09:47:15.574495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:14.629 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.629 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:14.629 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.629 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.629 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.629 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.630 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.630 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.630 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.630 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.630 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.630 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.630 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.630 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.630 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:14.630 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.630 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.630 "name": "Existed_Raid", 00:10:14.630 "uuid": "a592d877-2619-404b-a3c6-ee94f8b153a7", 00:10:14.630 "strip_size_kb": 64, 00:10:14.630 "state": "configuring", 00:10:14.630 "raid_level": "raid0", 00:10:14.630 "superblock": true, 00:10:14.630 "num_base_bdevs": 3, 00:10:14.630 "num_base_bdevs_discovered": 1, 00:10:14.630 "num_base_bdevs_operational": 3, 00:10:14.630 "base_bdevs_list": [ 00:10:14.630 { 00:10:14.630 "name": "BaseBdev1", 00:10:14.630 "uuid": "012e1e7a-8d09-4a19-bac5-1cb13f16d118", 00:10:14.630 "is_configured": true, 00:10:14.630 "data_offset": 2048, 00:10:14.630 "data_size": 63488 00:10:14.630 }, 00:10:14.630 { 00:10:14.630 "name": null, 00:10:14.630 "uuid": "00592072-be59-48f3-aac2-799d50497083", 00:10:14.630 "is_configured": false, 00:10:14.630 "data_offset": 0, 00:10:14.630 "data_size": 63488 00:10:14.630 }, 00:10:14.630 { 00:10:14.630 "name": null, 00:10:14.630 "uuid": "cf42b017-6b76-4fa7-85ba-b6f7f88e3fa1", 00:10:14.630 "is_configured": false, 00:10:14.630 "data_offset": 0, 00:10:14.630 "data_size": 63488 00:10:14.630 } 00:10:14.630 ] 00:10:14.630 }' 00:10:14.630 09:47:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.630 09:47:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.199 [2024-11-27 09:47:16.081640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.199 "name": "Existed_Raid", 00:10:15.199 "uuid": "a592d877-2619-404b-a3c6-ee94f8b153a7", 00:10:15.199 "strip_size_kb": 64, 00:10:15.199 "state": "configuring", 00:10:15.199 "raid_level": "raid0", 00:10:15.199 "superblock": true, 00:10:15.199 "num_base_bdevs": 3, 00:10:15.199 "num_base_bdevs_discovered": 2, 00:10:15.199 "num_base_bdevs_operational": 3, 00:10:15.199 "base_bdevs_list": [ 00:10:15.199 { 00:10:15.199 "name": "BaseBdev1", 00:10:15.199 "uuid": "012e1e7a-8d09-4a19-bac5-1cb13f16d118", 00:10:15.199 "is_configured": true, 00:10:15.199 "data_offset": 2048, 00:10:15.199 "data_size": 63488 00:10:15.199 }, 00:10:15.199 { 00:10:15.199 "name": null, 00:10:15.199 "uuid": "00592072-be59-48f3-aac2-799d50497083", 00:10:15.199 "is_configured": false, 00:10:15.199 "data_offset": 0, 00:10:15.199 "data_size": 63488 00:10:15.199 }, 00:10:15.199 { 00:10:15.199 "name": "BaseBdev3", 00:10:15.199 "uuid": "cf42b017-6b76-4fa7-85ba-b6f7f88e3fa1", 00:10:15.199 "is_configured": true, 00:10:15.199 "data_offset": 2048, 00:10:15.199 "data_size": 63488 00:10:15.199 } 00:10:15.199 ] 00:10:15.199 }' 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.199 09:47:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:15.458 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.458 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.458 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.458 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:15.458 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.458 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:15.458 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:15.458 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.458 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.458 [2024-11-27 09:47:16.516930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:15.716 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.716 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:15.716 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.716 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.717 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.717 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.717 09:47:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.717 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.717 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.717 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.717 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.717 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.717 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.717 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.717 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.717 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.717 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.717 "name": "Existed_Raid", 00:10:15.717 "uuid": "a592d877-2619-404b-a3c6-ee94f8b153a7", 00:10:15.717 "strip_size_kb": 64, 00:10:15.717 "state": "configuring", 00:10:15.717 "raid_level": "raid0", 00:10:15.717 "superblock": true, 00:10:15.717 "num_base_bdevs": 3, 00:10:15.717 "num_base_bdevs_discovered": 1, 00:10:15.717 "num_base_bdevs_operational": 3, 00:10:15.717 "base_bdevs_list": [ 00:10:15.717 { 00:10:15.717 "name": null, 00:10:15.717 "uuid": "012e1e7a-8d09-4a19-bac5-1cb13f16d118", 00:10:15.717 "is_configured": false, 00:10:15.717 "data_offset": 0, 00:10:15.717 "data_size": 63488 00:10:15.717 }, 00:10:15.717 { 00:10:15.717 "name": null, 00:10:15.717 "uuid": "00592072-be59-48f3-aac2-799d50497083", 00:10:15.717 "is_configured": false, 00:10:15.717 "data_offset": 0, 00:10:15.717 
"data_size": 63488 00:10:15.717 }, 00:10:15.717 { 00:10:15.717 "name": "BaseBdev3", 00:10:15.717 "uuid": "cf42b017-6b76-4fa7-85ba-b6f7f88e3fa1", 00:10:15.717 "is_configured": true, 00:10:15.717 "data_offset": 2048, 00:10:15.717 "data_size": 63488 00:10:15.717 } 00:10:15.717 ] 00:10:15.717 }' 00:10:15.717 09:47:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.717 09:47:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.975 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.975 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:15.975 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.975 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.975 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.234 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:16.234 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:16.234 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.234 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.234 [2024-11-27 09:47:17.134083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.234 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.234 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:16.234 09:47:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.234 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.234 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.234 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.234 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.234 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.234 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.234 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.234 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.234 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.234 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.234 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.234 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.234 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.234 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.234 "name": "Existed_Raid", 00:10:16.234 "uuid": "a592d877-2619-404b-a3c6-ee94f8b153a7", 00:10:16.234 "strip_size_kb": 64, 00:10:16.234 "state": "configuring", 00:10:16.234 "raid_level": "raid0", 00:10:16.234 "superblock": true, 00:10:16.234 "num_base_bdevs": 3, 00:10:16.234 
"num_base_bdevs_discovered": 2, 00:10:16.234 "num_base_bdevs_operational": 3, 00:10:16.234 "base_bdevs_list": [ 00:10:16.234 { 00:10:16.234 "name": null, 00:10:16.234 "uuid": "012e1e7a-8d09-4a19-bac5-1cb13f16d118", 00:10:16.234 "is_configured": false, 00:10:16.234 "data_offset": 0, 00:10:16.234 "data_size": 63488 00:10:16.234 }, 00:10:16.234 { 00:10:16.234 "name": "BaseBdev2", 00:10:16.234 "uuid": "00592072-be59-48f3-aac2-799d50497083", 00:10:16.234 "is_configured": true, 00:10:16.234 "data_offset": 2048, 00:10:16.234 "data_size": 63488 00:10:16.234 }, 00:10:16.234 { 00:10:16.234 "name": "BaseBdev3", 00:10:16.235 "uuid": "cf42b017-6b76-4fa7-85ba-b6f7f88e3fa1", 00:10:16.235 "is_configured": true, 00:10:16.235 "data_offset": 2048, 00:10:16.235 "data_size": 63488 00:10:16.235 } 00:10:16.235 ] 00:10:16.235 }' 00:10:16.235 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.235 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.494 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:16.494 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.494 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.494 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.494 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.494 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:16.494 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.494 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:16.494 09:47:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.494 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.494 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.753 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 012e1e7a-8d09-4a19-bac5-1cb13f16d118 00:10:16.753 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.753 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.753 [2024-11-27 09:47:17.671457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:16.753 NewBaseBdev 00:10:16.753 [2024-11-27 09:47:17.671801] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:16.753 [2024-11-27 09:47:17.671825] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:16.753 [2024-11-27 09:47:17.672141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:16.753 [2024-11-27 09:47:17.672309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:16.753 [2024-11-27 09:47:17.672325] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:16.753 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.753 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:16.753 [2024-11-27 09:47:17.672501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.753 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:16.753 
09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.753 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:16.753 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.753 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.753 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:16.753 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.753 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.753 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.753 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:16.753 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.753 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.753 [ 00:10:16.753 { 00:10:16.753 "name": "NewBaseBdev", 00:10:16.753 "aliases": [ 00:10:16.753 "012e1e7a-8d09-4a19-bac5-1cb13f16d118" 00:10:16.753 ], 00:10:16.753 "product_name": "Malloc disk", 00:10:16.753 "block_size": 512, 00:10:16.753 "num_blocks": 65536, 00:10:16.753 "uuid": "012e1e7a-8d09-4a19-bac5-1cb13f16d118", 00:10:16.753 "assigned_rate_limits": { 00:10:16.753 "rw_ios_per_sec": 0, 00:10:16.753 "rw_mbytes_per_sec": 0, 00:10:16.753 "r_mbytes_per_sec": 0, 00:10:16.753 "w_mbytes_per_sec": 0 00:10:16.753 }, 00:10:16.753 "claimed": true, 00:10:16.753 "claim_type": "exclusive_write", 00:10:16.753 "zoned": false, 00:10:16.753 "supported_io_types": { 00:10:16.753 "read": true, 00:10:16.753 "write": true, 00:10:16.753 
"unmap": true, 00:10:16.753 "flush": true, 00:10:16.753 "reset": true, 00:10:16.753 "nvme_admin": false, 00:10:16.753 "nvme_io": false, 00:10:16.753 "nvme_io_md": false, 00:10:16.753 "write_zeroes": true, 00:10:16.753 "zcopy": true, 00:10:16.753 "get_zone_info": false, 00:10:16.753 "zone_management": false, 00:10:16.753 "zone_append": false, 00:10:16.753 "compare": false, 00:10:16.753 "compare_and_write": false, 00:10:16.753 "abort": true, 00:10:16.753 "seek_hole": false, 00:10:16.753 "seek_data": false, 00:10:16.753 "copy": true, 00:10:16.753 "nvme_iov_md": false 00:10:16.753 }, 00:10:16.753 "memory_domains": [ 00:10:16.753 { 00:10:16.753 "dma_device_id": "system", 00:10:16.753 "dma_device_type": 1 00:10:16.753 }, 00:10:16.753 { 00:10:16.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.753 "dma_device_type": 2 00:10:16.753 } 00:10:16.753 ], 00:10:16.753 "driver_specific": {} 00:10:16.753 } 00:10:16.753 ] 00:10:16.753 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.753 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:16.754 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:16.754 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.754 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.754 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.754 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.754 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.754 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:10:16.754 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.754 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.754 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.754 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.754 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.754 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.754 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.754 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.754 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.754 "name": "Existed_Raid", 00:10:16.754 "uuid": "a592d877-2619-404b-a3c6-ee94f8b153a7", 00:10:16.754 "strip_size_kb": 64, 00:10:16.754 "state": "online", 00:10:16.754 "raid_level": "raid0", 00:10:16.754 "superblock": true, 00:10:16.754 "num_base_bdevs": 3, 00:10:16.754 "num_base_bdevs_discovered": 3, 00:10:16.754 "num_base_bdevs_operational": 3, 00:10:16.754 "base_bdevs_list": [ 00:10:16.754 { 00:10:16.754 "name": "NewBaseBdev", 00:10:16.754 "uuid": "012e1e7a-8d09-4a19-bac5-1cb13f16d118", 00:10:16.754 "is_configured": true, 00:10:16.754 "data_offset": 2048, 00:10:16.754 "data_size": 63488 00:10:16.754 }, 00:10:16.754 { 00:10:16.754 "name": "BaseBdev2", 00:10:16.754 "uuid": "00592072-be59-48f3-aac2-799d50497083", 00:10:16.754 "is_configured": true, 00:10:16.754 "data_offset": 2048, 00:10:16.754 "data_size": 63488 00:10:16.754 }, 00:10:16.754 { 00:10:16.754 "name": "BaseBdev3", 00:10:16.754 "uuid": "cf42b017-6b76-4fa7-85ba-b6f7f88e3fa1", 00:10:16.754 
"is_configured": true, 00:10:16.754 "data_offset": 2048, 00:10:16.754 "data_size": 63488 00:10:16.754 } 00:10:16.754 ] 00:10:16.754 }' 00:10:16.754 09:47:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.754 09:47:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.321 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:17.321 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:17.321 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.321 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.321 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.321 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.321 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.322 [2024-11-27 09:47:18.163006] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.322 "name": "Existed_Raid", 00:10:17.322 "aliases": [ 00:10:17.322 "a592d877-2619-404b-a3c6-ee94f8b153a7" 00:10:17.322 ], 00:10:17.322 "product_name": "Raid 
Volume", 00:10:17.322 "block_size": 512, 00:10:17.322 "num_blocks": 190464, 00:10:17.322 "uuid": "a592d877-2619-404b-a3c6-ee94f8b153a7", 00:10:17.322 "assigned_rate_limits": { 00:10:17.322 "rw_ios_per_sec": 0, 00:10:17.322 "rw_mbytes_per_sec": 0, 00:10:17.322 "r_mbytes_per_sec": 0, 00:10:17.322 "w_mbytes_per_sec": 0 00:10:17.322 }, 00:10:17.322 "claimed": false, 00:10:17.322 "zoned": false, 00:10:17.322 "supported_io_types": { 00:10:17.322 "read": true, 00:10:17.322 "write": true, 00:10:17.322 "unmap": true, 00:10:17.322 "flush": true, 00:10:17.322 "reset": true, 00:10:17.322 "nvme_admin": false, 00:10:17.322 "nvme_io": false, 00:10:17.322 "nvme_io_md": false, 00:10:17.322 "write_zeroes": true, 00:10:17.322 "zcopy": false, 00:10:17.322 "get_zone_info": false, 00:10:17.322 "zone_management": false, 00:10:17.322 "zone_append": false, 00:10:17.322 "compare": false, 00:10:17.322 "compare_and_write": false, 00:10:17.322 "abort": false, 00:10:17.322 "seek_hole": false, 00:10:17.322 "seek_data": false, 00:10:17.322 "copy": false, 00:10:17.322 "nvme_iov_md": false 00:10:17.322 }, 00:10:17.322 "memory_domains": [ 00:10:17.322 { 00:10:17.322 "dma_device_id": "system", 00:10:17.322 "dma_device_type": 1 00:10:17.322 }, 00:10:17.322 { 00:10:17.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.322 "dma_device_type": 2 00:10:17.322 }, 00:10:17.322 { 00:10:17.322 "dma_device_id": "system", 00:10:17.322 "dma_device_type": 1 00:10:17.322 }, 00:10:17.322 { 00:10:17.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.322 "dma_device_type": 2 00:10:17.322 }, 00:10:17.322 { 00:10:17.322 "dma_device_id": "system", 00:10:17.322 "dma_device_type": 1 00:10:17.322 }, 00:10:17.322 { 00:10:17.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.322 "dma_device_type": 2 00:10:17.322 } 00:10:17.322 ], 00:10:17.322 "driver_specific": { 00:10:17.322 "raid": { 00:10:17.322 "uuid": "a592d877-2619-404b-a3c6-ee94f8b153a7", 00:10:17.322 "strip_size_kb": 64, 00:10:17.322 "state": "online", 
00:10:17.322 "raid_level": "raid0", 00:10:17.322 "superblock": true, 00:10:17.322 "num_base_bdevs": 3, 00:10:17.322 "num_base_bdevs_discovered": 3, 00:10:17.322 "num_base_bdevs_operational": 3, 00:10:17.322 "base_bdevs_list": [ 00:10:17.322 { 00:10:17.322 "name": "NewBaseBdev", 00:10:17.322 "uuid": "012e1e7a-8d09-4a19-bac5-1cb13f16d118", 00:10:17.322 "is_configured": true, 00:10:17.322 "data_offset": 2048, 00:10:17.322 "data_size": 63488 00:10:17.322 }, 00:10:17.322 { 00:10:17.322 "name": "BaseBdev2", 00:10:17.322 "uuid": "00592072-be59-48f3-aac2-799d50497083", 00:10:17.322 "is_configured": true, 00:10:17.322 "data_offset": 2048, 00:10:17.322 "data_size": 63488 00:10:17.322 }, 00:10:17.322 { 00:10:17.322 "name": "BaseBdev3", 00:10:17.322 "uuid": "cf42b017-6b76-4fa7-85ba-b6f7f88e3fa1", 00:10:17.322 "is_configured": true, 00:10:17.322 "data_offset": 2048, 00:10:17.322 "data_size": 63488 00:10:17.322 } 00:10:17.322 ] 00:10:17.322 } 00:10:17.322 } 00:10:17.322 }' 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:17.322 BaseBdev2 00:10:17.322 BaseBdev3' 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.322 09:47:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.322 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.323 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.323 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:17.323 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.323 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.323 [2024-11-27 09:47:18.438171] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:17.323 [2024-11-27 09:47:18.438246] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.323 [2024-11-27 09:47:18.438347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.323 [2024-11-27 09:47:18.438408] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.323 [2024-11-27 09:47:18.438422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:17.323 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.323 09:47:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64707 00:10:17.323 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64707 ']' 00:10:17.323 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
64707 00:10:17.323 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:17.323 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.581 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64707 00:10:17.581 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.581 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.581 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64707' 00:10:17.581 killing process with pid 64707 00:10:17.581 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64707 00:10:17.581 [2024-11-27 09:47:18.488283] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:17.581 09:47:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64707 00:10:17.838 [2024-11-27 09:47:18.814841] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:19.217 09:47:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:19.217 00:10:19.217 real 0m10.687s 00:10:19.217 user 0m16.615s 00:10:19.217 sys 0m2.036s 00:10:19.217 ************************************ 00:10:19.217 END TEST raid_state_function_test_sb 00:10:19.217 09:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.217 09:47:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.217 ************************************ 00:10:19.217 09:47:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:10:19.217 09:47:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:19.217 
09:47:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.217 09:47:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:19.217 ************************************ 00:10:19.217 START TEST raid_superblock_test 00:10:19.217 ************************************ 00:10:19.217 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:10:19.217 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:19.217 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:19.217 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:19.217 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:19.217 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:19.217 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:19.217 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:19.217 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:19.217 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:19.217 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:19.217 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:19.217 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:19.217 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:19.218 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:19.218 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:10:19.218 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:19.218 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65331 00:10:19.218 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:19.218 09:47:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65331 00:10:19.218 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65331 ']' 00:10:19.218 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.218 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.218 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.218 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.218 09:47:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.218 [2024-11-27 09:47:20.201505] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:10:19.218 [2024-11-27 09:47:20.201764] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65331 ] 00:10:19.477 [2024-11-27 09:47:20.380618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.477 [2024-11-27 09:47:20.511298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.737 [2024-11-27 09:47:20.746991] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.737 [2024-11-27 09:47:20.747102] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:19.998 
09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.998 malloc1 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.998 [2024-11-27 09:47:21.087052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:19.998 [2024-11-27 09:47:21.087187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.998 [2024-11-27 09:47:21.087218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:19.998 [2024-11-27 09:47:21.087229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.998 [2024-11-27 09:47:21.089650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.998 [2024-11-27 09:47:21.089689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:19.998 pt1 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.998 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.259 malloc2 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.259 [2024-11-27 09:47:21.146319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:20.259 [2024-11-27 09:47:21.146423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.259 [2024-11-27 09:47:21.146468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:20.259 [2024-11-27 09:47:21.146495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.259 [2024-11-27 09:47:21.148946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.259 [2024-11-27 09:47:21.149035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:20.259 
pt2 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.259 malloc3 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.259 [2024-11-27 09:47:21.222037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:20.259 [2024-11-27 09:47:21.222144] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.259 [2024-11-27 09:47:21.222185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:20.259 [2024-11-27 09:47:21.222215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.259 [2024-11-27 09:47:21.224599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.259 [2024-11-27 09:47:21.224687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:20.259 pt3 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.259 [2024-11-27 09:47:21.234073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:20.259 [2024-11-27 09:47:21.236165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:20.259 [2024-11-27 09:47:21.236269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:20.259 [2024-11-27 09:47:21.236460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:20.259 [2024-11-27 09:47:21.236510] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:20.259 [2024-11-27 09:47:21.236781] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:20.259 [2024-11-27 09:47:21.236986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:20.259 [2024-11-27 09:47:21.237037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:20.259 [2024-11-27 09:47:21.237234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.259 09:47:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.259 "name": "raid_bdev1", 00:10:20.259 "uuid": "28105509-259d-41b9-bd93-e5959e993194", 00:10:20.259 "strip_size_kb": 64, 00:10:20.259 "state": "online", 00:10:20.259 "raid_level": "raid0", 00:10:20.259 "superblock": true, 00:10:20.259 "num_base_bdevs": 3, 00:10:20.259 "num_base_bdevs_discovered": 3, 00:10:20.259 "num_base_bdevs_operational": 3, 00:10:20.259 "base_bdevs_list": [ 00:10:20.259 { 00:10:20.259 "name": "pt1", 00:10:20.259 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.259 "is_configured": true, 00:10:20.259 "data_offset": 2048, 00:10:20.259 "data_size": 63488 00:10:20.259 }, 00:10:20.259 { 00:10:20.259 "name": "pt2", 00:10:20.259 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.259 "is_configured": true, 00:10:20.259 "data_offset": 2048, 00:10:20.259 "data_size": 63488 00:10:20.259 }, 00:10:20.259 { 00:10:20.259 "name": "pt3", 00:10:20.259 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.259 "is_configured": true, 00:10:20.259 "data_offset": 2048, 00:10:20.259 "data_size": 63488 00:10:20.259 } 00:10:20.259 ] 00:10:20.259 }' 00:10:20.259 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.260 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.830 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:20.830 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:20.830 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:20.830 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:20.830 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:20.830 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:20.830 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:20.830 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.830 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.830 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:20.830 [2024-11-27 09:47:21.729558] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.830 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.830 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:20.830 "name": "raid_bdev1", 00:10:20.830 "aliases": [ 00:10:20.830 "28105509-259d-41b9-bd93-e5959e993194" 00:10:20.830 ], 00:10:20.830 "product_name": "Raid Volume", 00:10:20.830 "block_size": 512, 00:10:20.830 "num_blocks": 190464, 00:10:20.830 "uuid": "28105509-259d-41b9-bd93-e5959e993194", 00:10:20.830 "assigned_rate_limits": { 00:10:20.830 "rw_ios_per_sec": 0, 00:10:20.830 "rw_mbytes_per_sec": 0, 00:10:20.830 "r_mbytes_per_sec": 0, 00:10:20.830 "w_mbytes_per_sec": 0 00:10:20.830 }, 00:10:20.830 "claimed": false, 00:10:20.830 "zoned": false, 00:10:20.830 "supported_io_types": { 00:10:20.830 "read": true, 00:10:20.830 "write": true, 00:10:20.830 "unmap": true, 00:10:20.830 "flush": true, 00:10:20.830 "reset": true, 00:10:20.830 "nvme_admin": false, 00:10:20.830 "nvme_io": false, 00:10:20.830 "nvme_io_md": false, 00:10:20.830 "write_zeroes": true, 00:10:20.830 "zcopy": false, 00:10:20.830 "get_zone_info": false, 00:10:20.830 "zone_management": false, 00:10:20.830 "zone_append": false, 00:10:20.830 "compare": 
false, 00:10:20.830 "compare_and_write": false, 00:10:20.830 "abort": false, 00:10:20.830 "seek_hole": false, 00:10:20.830 "seek_data": false, 00:10:20.830 "copy": false, 00:10:20.830 "nvme_iov_md": false 00:10:20.830 }, 00:10:20.830 "memory_domains": [ 00:10:20.830 { 00:10:20.830 "dma_device_id": "system", 00:10:20.830 "dma_device_type": 1 00:10:20.830 }, 00:10:20.830 { 00:10:20.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.830 "dma_device_type": 2 00:10:20.830 }, 00:10:20.830 { 00:10:20.830 "dma_device_id": "system", 00:10:20.830 "dma_device_type": 1 00:10:20.830 }, 00:10:20.830 { 00:10:20.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.831 "dma_device_type": 2 00:10:20.831 }, 00:10:20.831 { 00:10:20.831 "dma_device_id": "system", 00:10:20.831 "dma_device_type": 1 00:10:20.831 }, 00:10:20.831 { 00:10:20.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.831 "dma_device_type": 2 00:10:20.831 } 00:10:20.831 ], 00:10:20.831 "driver_specific": { 00:10:20.831 "raid": { 00:10:20.831 "uuid": "28105509-259d-41b9-bd93-e5959e993194", 00:10:20.831 "strip_size_kb": 64, 00:10:20.831 "state": "online", 00:10:20.831 "raid_level": "raid0", 00:10:20.831 "superblock": true, 00:10:20.831 "num_base_bdevs": 3, 00:10:20.831 "num_base_bdevs_discovered": 3, 00:10:20.831 "num_base_bdevs_operational": 3, 00:10:20.831 "base_bdevs_list": [ 00:10:20.831 { 00:10:20.831 "name": "pt1", 00:10:20.831 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.831 "is_configured": true, 00:10:20.831 "data_offset": 2048, 00:10:20.831 "data_size": 63488 00:10:20.831 }, 00:10:20.831 { 00:10:20.831 "name": "pt2", 00:10:20.831 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.831 "is_configured": true, 00:10:20.831 "data_offset": 2048, 00:10:20.831 "data_size": 63488 00:10:20.831 }, 00:10:20.831 { 00:10:20.831 "name": "pt3", 00:10:20.831 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.831 "is_configured": true, 00:10:20.831 "data_offset": 2048, 00:10:20.831 "data_size": 
63488 00:10:20.831 } 00:10:20.831 ] 00:10:20.831 } 00:10:20.831 } 00:10:20.831 }' 00:10:20.831 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.831 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:20.831 pt2 00:10:20.831 pt3' 00:10:20.831 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.831 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.831 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.831 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:20.831 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.831 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.831 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.831 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.831 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.831 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.831 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.831 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:20.831 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.831 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.831 
09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.831 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.101 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.101 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.101 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.101 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:21.101 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.101 09:47:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.101 09:47:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:21.101 [2024-11-27 09:47:22.040930] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=28105509-259d-41b9-bd93-e5959e993194 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 28105509-259d-41b9-bd93-e5959e993194 ']' 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.101 [2024-11-27 09:47:22.088538] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.101 [2024-11-27 09:47:22.088568] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.101 [2024-11-27 09:47:22.088659] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.101 [2024-11-27 09:47:22.088740] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.101 [2024-11-27 09:47:22.088751] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.101 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.102 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:21.102 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:21.102 09:47:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.102 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.102 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.382 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:21.382 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:21.382 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.383 [2024-11-27 09:47:22.240368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:21.383 [2024-11-27 09:47:22.242649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:21.383 [2024-11-27 09:47:22.242751] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:21.383 [2024-11-27 09:47:22.242845] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:21.383 [2024-11-27 09:47:22.242941] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:21.383 [2024-11-27 09:47:22.243008] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:21.383 [2024-11-27 09:47:22.243095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.383 [2024-11-27 09:47:22.243130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:21.383 request: 00:10:21.383 { 00:10:21.383 "name": "raid_bdev1", 00:10:21.383 "raid_level": "raid0", 00:10:21.383 "base_bdevs": [ 00:10:21.383 "malloc1", 00:10:21.383 "malloc2", 00:10:21.383 "malloc3" 00:10:21.383 ], 00:10:21.383 "strip_size_kb": 64, 00:10:21.383 "superblock": false, 00:10:21.383 "method": "bdev_raid_create", 00:10:21.383 "req_id": 1 00:10:21.383 } 00:10:21.383 Got JSON-RPC error response 00:10:21.383 response: 00:10:21.383 { 00:10:21.383 "code": -17, 00:10:21.383 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:21.383 } 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.383 [2024-11-27 09:47:22.304193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:21.383 [2024-11-27 09:47:22.304300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.383 [2024-11-27 09:47:22.304349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:21.383 [2024-11-27 09:47:22.304380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.383 [2024-11-27 09:47:22.306973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.383 [2024-11-27 09:47:22.307060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:21.383 [2024-11-27 09:47:22.307187] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:21.383 [2024-11-27 09:47:22.307279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:10:21.383 pt1 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.383 "name": "raid_bdev1", 00:10:21.383 "uuid": "28105509-259d-41b9-bd93-e5959e993194", 00:10:21.383 
"strip_size_kb": 64, 00:10:21.383 "state": "configuring", 00:10:21.383 "raid_level": "raid0", 00:10:21.383 "superblock": true, 00:10:21.383 "num_base_bdevs": 3, 00:10:21.383 "num_base_bdevs_discovered": 1, 00:10:21.383 "num_base_bdevs_operational": 3, 00:10:21.383 "base_bdevs_list": [ 00:10:21.383 { 00:10:21.383 "name": "pt1", 00:10:21.383 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.383 "is_configured": true, 00:10:21.383 "data_offset": 2048, 00:10:21.383 "data_size": 63488 00:10:21.383 }, 00:10:21.383 { 00:10:21.383 "name": null, 00:10:21.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.383 "is_configured": false, 00:10:21.383 "data_offset": 2048, 00:10:21.383 "data_size": 63488 00:10:21.383 }, 00:10:21.383 { 00:10:21.383 "name": null, 00:10:21.383 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.383 "is_configured": false, 00:10:21.383 "data_offset": 2048, 00:10:21.383 "data_size": 63488 00:10:21.383 } 00:10:21.383 ] 00:10:21.383 }' 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.383 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.643 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:21.643 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:21.643 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.643 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.643 [2024-11-27 09:47:22.735534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:21.643 [2024-11-27 09:47:22.735664] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.643 [2024-11-27 09:47:22.735724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:10:21.643 [2024-11-27 09:47:22.735755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.643 [2024-11-27 09:47:22.736315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.643 [2024-11-27 09:47:22.736380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:21.643 [2024-11-27 09:47:22.736523] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:21.643 [2024-11-27 09:47:22.736584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:21.643 pt2 00:10:21.643 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.643 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:21.643 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.644 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.644 [2024-11-27 09:47:22.747476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:21.644 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.644 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:21.644 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.644 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.644 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.644 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.644 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.644 09:47:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.644 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.644 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.644 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.644 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.644 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.644 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.644 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.904 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.904 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.904 "name": "raid_bdev1", 00:10:21.904 "uuid": "28105509-259d-41b9-bd93-e5959e993194", 00:10:21.904 "strip_size_kb": 64, 00:10:21.904 "state": "configuring", 00:10:21.904 "raid_level": "raid0", 00:10:21.904 "superblock": true, 00:10:21.904 "num_base_bdevs": 3, 00:10:21.904 "num_base_bdevs_discovered": 1, 00:10:21.904 "num_base_bdevs_operational": 3, 00:10:21.904 "base_bdevs_list": [ 00:10:21.904 { 00:10:21.904 "name": "pt1", 00:10:21.904 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.904 "is_configured": true, 00:10:21.904 "data_offset": 2048, 00:10:21.904 "data_size": 63488 00:10:21.904 }, 00:10:21.904 { 00:10:21.904 "name": null, 00:10:21.904 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.904 "is_configured": false, 00:10:21.904 "data_offset": 0, 00:10:21.904 "data_size": 63488 00:10:21.904 }, 00:10:21.904 { 00:10:21.904 "name": null, 00:10:21.904 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.904 
"is_configured": false, 00:10:21.904 "data_offset": 2048, 00:10:21.904 "data_size": 63488 00:10:21.904 } 00:10:21.904 ] 00:10:21.904 }' 00:10:21.904 09:47:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.904 09:47:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.165 [2024-11-27 09:47:23.210681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:22.165 [2024-11-27 09:47:23.210835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.165 [2024-11-27 09:47:23.210873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:22.165 [2024-11-27 09:47:23.210906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.165 [2024-11-27 09:47:23.211519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.165 [2024-11-27 09:47:23.211589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:22.165 [2024-11-27 09:47:23.211695] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:22.165 [2024-11-27 09:47:23.211726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:22.165 pt2 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.165 [2024-11-27 09:47:23.218627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:22.165 [2024-11-27 09:47:23.218678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.165 [2024-11-27 09:47:23.218692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:22.165 [2024-11-27 09:47:23.218702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.165 [2024-11-27 09:47:23.219134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.165 [2024-11-27 09:47:23.219157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:22.165 [2024-11-27 09:47:23.219225] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:22.165 [2024-11-27 09:47:23.219248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:22.165 [2024-11-27 09:47:23.219371] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:22.165 [2024-11-27 09:47:23.219383] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:22.165 [2024-11-27 09:47:23.219638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:22.165 [2024-11-27 09:47:23.219815] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:22.165 [2024-11-27 09:47:23.219824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:22.165 [2024-11-27 09:47:23.219980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.165 pt3 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.165 "name": "raid_bdev1", 00:10:22.165 "uuid": "28105509-259d-41b9-bd93-e5959e993194", 00:10:22.165 "strip_size_kb": 64, 00:10:22.165 "state": "online", 00:10:22.165 "raid_level": "raid0", 00:10:22.165 "superblock": true, 00:10:22.165 "num_base_bdevs": 3, 00:10:22.165 "num_base_bdevs_discovered": 3, 00:10:22.165 "num_base_bdevs_operational": 3, 00:10:22.165 "base_bdevs_list": [ 00:10:22.165 { 00:10:22.165 "name": "pt1", 00:10:22.165 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.165 "is_configured": true, 00:10:22.165 "data_offset": 2048, 00:10:22.165 "data_size": 63488 00:10:22.165 }, 00:10:22.165 { 00:10:22.165 "name": "pt2", 00:10:22.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.165 "is_configured": true, 00:10:22.165 "data_offset": 2048, 00:10:22.165 "data_size": 63488 00:10:22.165 }, 00:10:22.165 { 00:10:22.165 "name": "pt3", 00:10:22.165 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.165 "is_configured": true, 00:10:22.165 "data_offset": 2048, 00:10:22.165 "data_size": 63488 00:10:22.165 } 00:10:22.165 ] 00:10:22.165 }' 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.165 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.736 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:22.737 09:47:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.737 [2024-11-27 09:47:23.614394] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:22.737 "name": "raid_bdev1", 00:10:22.737 "aliases": [ 00:10:22.737 "28105509-259d-41b9-bd93-e5959e993194" 00:10:22.737 ], 00:10:22.737 "product_name": "Raid Volume", 00:10:22.737 "block_size": 512, 00:10:22.737 "num_blocks": 190464, 00:10:22.737 "uuid": "28105509-259d-41b9-bd93-e5959e993194", 00:10:22.737 "assigned_rate_limits": { 00:10:22.737 "rw_ios_per_sec": 0, 00:10:22.737 "rw_mbytes_per_sec": 0, 00:10:22.737 "r_mbytes_per_sec": 0, 00:10:22.737 "w_mbytes_per_sec": 0 00:10:22.737 }, 00:10:22.737 "claimed": false, 00:10:22.737 "zoned": false, 00:10:22.737 "supported_io_types": { 00:10:22.737 "read": true, 00:10:22.737 "write": true, 00:10:22.737 "unmap": true, 00:10:22.737 "flush": true, 00:10:22.737 "reset": true, 00:10:22.737 "nvme_admin": false, 00:10:22.737 "nvme_io": false, 00:10:22.737 "nvme_io_md": false, 00:10:22.737 
"write_zeroes": true, 00:10:22.737 "zcopy": false, 00:10:22.737 "get_zone_info": false, 00:10:22.737 "zone_management": false, 00:10:22.737 "zone_append": false, 00:10:22.737 "compare": false, 00:10:22.737 "compare_and_write": false, 00:10:22.737 "abort": false, 00:10:22.737 "seek_hole": false, 00:10:22.737 "seek_data": false, 00:10:22.737 "copy": false, 00:10:22.737 "nvme_iov_md": false 00:10:22.737 }, 00:10:22.737 "memory_domains": [ 00:10:22.737 { 00:10:22.737 "dma_device_id": "system", 00:10:22.737 "dma_device_type": 1 00:10:22.737 }, 00:10:22.737 { 00:10:22.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.737 "dma_device_type": 2 00:10:22.737 }, 00:10:22.737 { 00:10:22.737 "dma_device_id": "system", 00:10:22.737 "dma_device_type": 1 00:10:22.737 }, 00:10:22.737 { 00:10:22.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.737 "dma_device_type": 2 00:10:22.737 }, 00:10:22.737 { 00:10:22.737 "dma_device_id": "system", 00:10:22.737 "dma_device_type": 1 00:10:22.737 }, 00:10:22.737 { 00:10:22.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.737 "dma_device_type": 2 00:10:22.737 } 00:10:22.737 ], 00:10:22.737 "driver_specific": { 00:10:22.737 "raid": { 00:10:22.737 "uuid": "28105509-259d-41b9-bd93-e5959e993194", 00:10:22.737 "strip_size_kb": 64, 00:10:22.737 "state": "online", 00:10:22.737 "raid_level": "raid0", 00:10:22.737 "superblock": true, 00:10:22.737 "num_base_bdevs": 3, 00:10:22.737 "num_base_bdevs_discovered": 3, 00:10:22.737 "num_base_bdevs_operational": 3, 00:10:22.737 "base_bdevs_list": [ 00:10:22.737 { 00:10:22.737 "name": "pt1", 00:10:22.737 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:22.737 "is_configured": true, 00:10:22.737 "data_offset": 2048, 00:10:22.737 "data_size": 63488 00:10:22.737 }, 00:10:22.737 { 00:10:22.737 "name": "pt2", 00:10:22.737 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.737 "is_configured": true, 00:10:22.737 "data_offset": 2048, 00:10:22.737 "data_size": 63488 00:10:22.737 }, 00:10:22.737 
{ 00:10:22.737 "name": "pt3", 00:10:22.737 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.737 "is_configured": true, 00:10:22.737 "data_offset": 2048, 00:10:22.737 "data_size": 63488 00:10:22.737 } 00:10:22.737 ] 00:10:22.737 } 00:10:22.737 } 00:10:22.737 }' 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:22.737 pt2 00:10:22.737 pt3' 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:22.737 09:47:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.737 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.998 
[2024-11-27 09:47:23.881848] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 28105509-259d-41b9-bd93-e5959e993194 '!=' 28105509-259d-41b9-bd93-e5959e993194 ']' 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65331 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65331 ']' 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65331 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65331 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65331' 00:10:22.998 killing process with pid 65331 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65331 00:10:22.998 [2024-11-27 09:47:23.969463] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.998 [2024-11-27 09:47:23.969636] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.998 09:47:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65331 00:10:22.998 [2024-11-27 09:47:23.969741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.998 [2024-11-27 09:47:23.969792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:23.258 [2024-11-27 09:47:24.304766] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.639 09:47:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:24.639 00:10:24.639 real 0m5.431s 00:10:24.639 user 0m7.612s 00:10:24.639 sys 0m0.994s 00:10:24.639 ************************************ 00:10:24.639 END TEST raid_superblock_test 00:10:24.639 ************************************ 00:10:24.639 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.639 09:47:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.639 09:47:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:24.639 09:47:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:24.639 09:47:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.639 09:47:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.639 ************************************ 00:10:24.639 START TEST raid_read_error_test 00:10:24.639 ************************************ 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:24.639 09:47:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:24.639 09:47:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:24.640 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:24.640 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:24.640 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:24.640 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WULP3Bc7oP 00:10:24.640 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65584 00:10:24.640 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:24.640 09:47:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65584 00:10:24.640 09:47:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65584 ']' 00:10:24.640 09:47:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.640 09:47:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.640 09:47:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.640 09:47:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.640 09:47:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.640 [2024-11-27 09:47:25.710083] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:10:24.640 [2024-11-27 09:47:25.710224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65584 ] 00:10:24.900 [2024-11-27 09:47:25.892455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.159 [2024-11-27 09:47:26.033441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.159 [2024-11-27 09:47:26.268991] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.159 [2024-11-27 09:47:26.269052] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.730 BaseBdev1_malloc 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.730 true 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.730 [2024-11-27 09:47:26.618941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:25.730 [2024-11-27 09:47:26.619078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.730 [2024-11-27 09:47:26.619126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:25.730 [2024-11-27 09:47:26.619164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.730 [2024-11-27 09:47:26.621752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.730 [2024-11-27 09:47:26.621848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:25.730 BaseBdev1 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.730 BaseBdev2_malloc 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.730 true 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.730 [2024-11-27 09:47:26.692532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:25.730 [2024-11-27 09:47:26.692656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.730 [2024-11-27 09:47:26.692692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:25.730 [2024-11-27 09:47:26.692722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.730 [2024-11-27 09:47:26.695198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.730 [2024-11-27 09:47:26.695270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:25.730 BaseBdev2 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.730 BaseBdev3_malloc 00:10:25.730 09:47:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.730 true 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.730 [2024-11-27 09:47:26.777887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:25.730 [2024-11-27 09:47:26.777990] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:25.730 [2024-11-27 09:47:26.778034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:25.730 [2024-11-27 09:47:26.778066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:25.730 [2024-11-27 09:47:26.780521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:25.730 [2024-11-27 09:47:26.780599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:25.730 BaseBdev3 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.730 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.730 [2024-11-27 09:47:26.789964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.730 [2024-11-27 09:47:26.792135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.730 [2024-11-27 09:47:26.792248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:25.730 [2024-11-27 09:47:26.792471] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:25.730 [2024-11-27 09:47:26.792487] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:25.730 [2024-11-27 09:47:26.792744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:25.730 [2024-11-27 09:47:26.792918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:25.730 [2024-11-27 09:47:26.792933] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:25.730 [2024-11-27 09:47:26.793098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.731 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.731 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:25.731 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.731 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.731 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.731 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.731 09:47:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.731 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.731 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.731 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.731 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.731 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.731 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.731 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.731 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.731 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.731 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.731 "name": "raid_bdev1", 00:10:25.731 "uuid": "2f6b2433-9b9b-4f36-9842-afadd65888cd", 00:10:25.731 "strip_size_kb": 64, 00:10:25.731 "state": "online", 00:10:25.731 "raid_level": "raid0", 00:10:25.731 "superblock": true, 00:10:25.731 "num_base_bdevs": 3, 00:10:25.731 "num_base_bdevs_discovered": 3, 00:10:25.731 "num_base_bdevs_operational": 3, 00:10:25.731 "base_bdevs_list": [ 00:10:25.731 { 00:10:25.731 "name": "BaseBdev1", 00:10:25.731 "uuid": "6ab7ebc7-8759-5df0-8069-c183af03e601", 00:10:25.731 "is_configured": true, 00:10:25.731 "data_offset": 2048, 00:10:25.731 "data_size": 63488 00:10:25.731 }, 00:10:25.731 { 00:10:25.731 "name": "BaseBdev2", 00:10:25.731 "uuid": "70167aac-b5da-55d3-837f-9707f8a659de", 00:10:25.731 "is_configured": true, 00:10:25.731 "data_offset": 2048, 00:10:25.731 "data_size": 63488 
00:10:25.731 }, 00:10:25.731 { 00:10:25.731 "name": "BaseBdev3", 00:10:25.731 "uuid": "2cd2b66d-60a8-5fc6-89fe-ced195bfe947", 00:10:25.731 "is_configured": true, 00:10:25.731 "data_offset": 2048, 00:10:25.731 "data_size": 63488 00:10:25.731 } 00:10:25.731 ] 00:10:25.731 }' 00:10:25.731 09:47:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.731 09:47:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.301 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:26.301 09:47:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:26.301 [2024-11-27 09:47:27.326337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.241 "name": "raid_bdev1", 00:10:27.241 "uuid": "2f6b2433-9b9b-4f36-9842-afadd65888cd", 00:10:27.241 "strip_size_kb": 64, 00:10:27.241 "state": "online", 00:10:27.241 "raid_level": "raid0", 00:10:27.241 "superblock": true, 00:10:27.241 "num_base_bdevs": 3, 00:10:27.241 "num_base_bdevs_discovered": 3, 00:10:27.241 "num_base_bdevs_operational": 3, 00:10:27.241 "base_bdevs_list": [ 00:10:27.241 { 00:10:27.241 "name": "BaseBdev1", 00:10:27.241 "uuid": "6ab7ebc7-8759-5df0-8069-c183af03e601", 00:10:27.241 "is_configured": true, 00:10:27.241 "data_offset": 2048, 00:10:27.241 "data_size": 63488 
00:10:27.241 }, 00:10:27.241 { 00:10:27.241 "name": "BaseBdev2", 00:10:27.241 "uuid": "70167aac-b5da-55d3-837f-9707f8a659de", 00:10:27.241 "is_configured": true, 00:10:27.241 "data_offset": 2048, 00:10:27.241 "data_size": 63488 00:10:27.241 }, 00:10:27.241 { 00:10:27.241 "name": "BaseBdev3", 00:10:27.241 "uuid": "2cd2b66d-60a8-5fc6-89fe-ced195bfe947", 00:10:27.241 "is_configured": true, 00:10:27.241 "data_offset": 2048, 00:10:27.241 "data_size": 63488 00:10:27.241 } 00:10:27.241 ] 00:10:27.241 }' 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.241 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.809 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:27.809 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.809 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.809 [2024-11-27 09:47:28.707413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.810 [2024-11-27 09:47:28.707451] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.810 { 00:10:27.810 "results": [ 00:10:27.810 { 00:10:27.810 "job": "raid_bdev1", 00:10:27.810 "core_mask": "0x1", 00:10:27.810 "workload": "randrw", 00:10:27.810 "percentage": 50, 00:10:27.810 "status": "finished", 00:10:27.810 "queue_depth": 1, 00:10:27.810 "io_size": 131072, 00:10:27.810 "runtime": 1.381646, 00:10:27.810 "iops": 13620.7103700948, 00:10:27.810 "mibps": 1702.58879626185, 00:10:27.810 "io_failed": 1, 00:10:27.810 "io_timeout": 0, 00:10:27.810 "avg_latency_us": 103.09148773255248, 00:10:27.810 "min_latency_us": 23.58777292576419, 00:10:27.810 "max_latency_us": 1416.6078602620087 00:10:27.810 } 00:10:27.810 ], 00:10:27.810 "core_count": 1 00:10:27.810 } 00:10:27.810 [2024-11-27 09:47:28.710314] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.810 [2024-11-27 09:47:28.710366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.810 [2024-11-27 09:47:28.710408] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.810 [2024-11-27 09:47:28.710418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:27.810 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.810 09:47:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65584 00:10:27.810 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65584 ']' 00:10:27.810 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65584 00:10:27.810 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:27.810 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.810 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65584 00:10:27.810 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:27.810 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:27.810 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65584' 00:10:27.810 killing process with pid 65584 00:10:27.810 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65584 00:10:27.810 [2024-11-27 09:47:28.760612] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:27.810 09:47:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65584 00:10:28.070 [2024-11-27 09:47:29.015733] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:29.478 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WULP3Bc7oP 00:10:29.478 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:29.478 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:29.478 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:29.478 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:29.478 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:29.478 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:29.478 09:47:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:10:29.478 00:10:29.478 real 0m4.718s 00:10:29.478 user 0m5.491s 00:10:29.478 sys 0m0.668s 00:10:29.478 09:47:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.478 09:47:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.478 ************************************ 00:10:29.478 END TEST raid_read_error_test 00:10:29.478 ************************************ 00:10:29.478 09:47:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:29.478 09:47:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:29.478 09:47:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.478 09:47:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.478 ************************************ 00:10:29.478 START TEST raid_write_error_test 00:10:29.478 ************************************ 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:10:29.478 09:47:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:29.478 09:47:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.AxfifaEoJ1 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65730 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65730 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65730 ']' 00:10:29.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:29.478 09:47:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.478 [2024-11-27 09:47:30.502261] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:10:29.478 [2024-11-27 09:47:30.502418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65730 ] 00:10:29.738 [2024-11-27 09:47:30.680688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.738 [2024-11-27 09:47:30.819158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.997 [2024-11-27 09:47:31.050513] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.997 [2024-11-27 09:47:31.050564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.257 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:30.257 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:30.257 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.257 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:30.257 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.257 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.257 BaseBdev1_malloc 00:10:30.257 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.257 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:30.257 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.257 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.518 true 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.518 [2024-11-27 09:47:31.402676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:30.518 [2024-11-27 09:47:31.402782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.518 [2024-11-27 09:47:31.402836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:30.518 [2024-11-27 09:47:31.402871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.518 [2024-11-27 09:47:31.405295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.518 [2024-11-27 09:47:31.405374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:30.518 BaseBdev1 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:30.518 BaseBdev2_malloc 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.518 true 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.518 [2024-11-27 09:47:31.475651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:30.518 [2024-11-27 09:47:31.475764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.518 [2024-11-27 09:47:31.475800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:30.518 [2024-11-27 09:47:31.475832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.518 [2024-11-27 09:47:31.478326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.518 [2024-11-27 09:47:31.478398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:30.518 BaseBdev2 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:30.518 09:47:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.518 BaseBdev3_malloc 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.518 true 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.518 [2024-11-27 09:47:31.562557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:30.518 [2024-11-27 09:47:31.562669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:30.518 [2024-11-27 09:47:31.562703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:30.518 [2024-11-27 09:47:31.562734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:30.518 [2024-11-27 09:47:31.565218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:30.518 [2024-11-27 09:47:31.565293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:30.518 BaseBdev3 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.518 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.519 [2024-11-27 09:47:31.574628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:30.519 [2024-11-27 09:47:31.576882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.519 [2024-11-27 09:47:31.577012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.519 [2024-11-27 09:47:31.577265] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:30.519 [2024-11-27 09:47:31.577316] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:30.519 [2024-11-27 09:47:31.577601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:30.519 [2024-11-27 09:47:31.577814] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:30.519 [2024-11-27 09:47:31.577862] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:30.519 [2024-11-27 09:47:31.578047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.519 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.519 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:30.519 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:10:30.519 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.519 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.519 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.519 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.519 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.519 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.519 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.519 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.519 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.519 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.519 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.519 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.519 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.519 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.519 "name": "raid_bdev1", 00:10:30.519 "uuid": "c1db124f-c438-477a-ba86-b69ae263d17d", 00:10:30.519 "strip_size_kb": 64, 00:10:30.519 "state": "online", 00:10:30.519 "raid_level": "raid0", 00:10:30.519 "superblock": true, 00:10:30.519 "num_base_bdevs": 3, 00:10:30.519 "num_base_bdevs_discovered": 3, 00:10:30.519 "num_base_bdevs_operational": 3, 00:10:30.519 "base_bdevs_list": [ 00:10:30.519 { 00:10:30.519 "name": "BaseBdev1", 
00:10:30.519 "uuid": "b8d30236-f5e8-5813-9516-498b03922e2c", 00:10:30.519 "is_configured": true, 00:10:30.519 "data_offset": 2048, 00:10:30.519 "data_size": 63488 00:10:30.519 }, 00:10:30.519 { 00:10:30.519 "name": "BaseBdev2", 00:10:30.519 "uuid": "101290a5-eba3-5a43-896e-e1b2836ded33", 00:10:30.519 "is_configured": true, 00:10:30.519 "data_offset": 2048, 00:10:30.519 "data_size": 63488 00:10:30.519 }, 00:10:30.519 { 00:10:30.519 "name": "BaseBdev3", 00:10:30.519 "uuid": "748650db-7546-5853-9e88-9f65f1596ca0", 00:10:30.519 "is_configured": true, 00:10:30.519 "data_offset": 2048, 00:10:30.519 "data_size": 63488 00:10:30.519 } 00:10:30.519 ] 00:10:30.519 }' 00:10:30.519 09:47:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.519 09:47:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.089 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:31.089 09:47:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:31.089 [2024-11-27 09:47:32.115121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.029 "name": "raid_bdev1", 00:10:32.029 "uuid": "c1db124f-c438-477a-ba86-b69ae263d17d", 00:10:32.029 "strip_size_kb": 64, 00:10:32.029 "state": "online", 00:10:32.029 
"raid_level": "raid0", 00:10:32.029 "superblock": true, 00:10:32.029 "num_base_bdevs": 3, 00:10:32.029 "num_base_bdevs_discovered": 3, 00:10:32.029 "num_base_bdevs_operational": 3, 00:10:32.029 "base_bdevs_list": [ 00:10:32.029 { 00:10:32.029 "name": "BaseBdev1", 00:10:32.029 "uuid": "b8d30236-f5e8-5813-9516-498b03922e2c", 00:10:32.029 "is_configured": true, 00:10:32.029 "data_offset": 2048, 00:10:32.029 "data_size": 63488 00:10:32.029 }, 00:10:32.029 { 00:10:32.029 "name": "BaseBdev2", 00:10:32.029 "uuid": "101290a5-eba3-5a43-896e-e1b2836ded33", 00:10:32.029 "is_configured": true, 00:10:32.029 "data_offset": 2048, 00:10:32.029 "data_size": 63488 00:10:32.029 }, 00:10:32.029 { 00:10:32.029 "name": "BaseBdev3", 00:10:32.029 "uuid": "748650db-7546-5853-9e88-9f65f1596ca0", 00:10:32.029 "is_configured": true, 00:10:32.029 "data_offset": 2048, 00:10:32.029 "data_size": 63488 00:10:32.029 } 00:10:32.029 ] 00:10:32.029 }' 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.029 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.598 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:32.598 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.598 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.598 [2024-11-27 09:47:33.463843] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:32.598 [2024-11-27 09:47:33.463946] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.598 [2024-11-27 09:47:33.466809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.598 [2024-11-27 09:47:33.466903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.598 [2024-11-27 09:47:33.466954] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.598 [2024-11-27 09:47:33.466964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:32.598 { 00:10:32.598 "results": [ 00:10:32.598 { 00:10:32.598 "job": "raid_bdev1", 00:10:32.598 "core_mask": "0x1", 00:10:32.598 "workload": "randrw", 00:10:32.598 "percentage": 50, 00:10:32.598 "status": "finished", 00:10:32.598 "queue_depth": 1, 00:10:32.598 "io_size": 131072, 00:10:32.598 "runtime": 1.349407, 00:10:32.598 "iops": 13655.627990665529, 00:10:32.598 "mibps": 1706.9534988331911, 00:10:32.598 "io_failed": 1, 00:10:32.598 "io_timeout": 0, 00:10:32.598 "avg_latency_us": 102.76021888089417, 00:10:32.598 "min_latency_us": 25.041048034934498, 00:10:32.598 "max_latency_us": 1473.844541484716 00:10:32.598 } 00:10:32.598 ], 00:10:32.598 "core_count": 1 00:10:32.598 } 00:10:32.598 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.598 09:47:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65730 00:10:32.598 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65730 ']' 00:10:32.598 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65730 00:10:32.598 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:32.598 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.598 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65730 00:10:32.598 killing process with pid 65730 00:10:32.598 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:32.598 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:32.598 
09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65730' 00:10:32.599 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65730 00:10:32.599 [2024-11-27 09:47:33.512756] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:32.599 09:47:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65730 00:10:32.858 [2024-11-27 09:47:33.760950] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:34.240 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.AxfifaEoJ1 00:10:34.240 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:34.240 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:34.240 ************************************ 00:10:34.240 END TEST raid_write_error_test 00:10:34.240 ************************************ 00:10:34.240 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:34.240 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:34.240 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:34.240 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:34.240 09:47:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:34.240 00:10:34.240 real 0m4.662s 00:10:34.240 user 0m5.383s 00:10:34.240 sys 0m0.663s 00:10:34.240 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.240 09:47:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.240 09:47:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:34.240 09:47:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:10:34.240 09:47:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:34.240 09:47:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.240 09:47:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:34.240 ************************************ 00:10:34.240 START TEST raid_state_function_test 00:10:34.240 ************************************ 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:34.240 09:47:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:34.240 Process raid pid: 65879 00:10:34.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65879 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65879' 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65879 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65879 ']' 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.240 09:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:34.240 [2024-11-27 09:47:35.211350] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:10:34.240 [2024-11-27 09:47:35.211590] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.500 [2024-11-27 09:47:35.391621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.500 [2024-11-27 09:47:35.531260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.760 [2024-11-27 09:47:35.770637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.760 [2024-11-27 09:47:35.770689] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.020 [2024-11-27 09:47:36.035184] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.020 [2024-11-27 09:47:36.035282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.020 [2024-11-27 09:47:36.035318] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.020 [2024-11-27 09:47:36.035343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.020 [2024-11-27 09:47:36.035367] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:10:35.020 [2024-11-27 09:47:36.035397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.020 09:47:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.020 "name": "Existed_Raid", 00:10:35.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.020 "strip_size_kb": 64, 00:10:35.020 "state": "configuring", 00:10:35.020 "raid_level": "concat", 00:10:35.020 "superblock": false, 00:10:35.020 "num_base_bdevs": 3, 00:10:35.020 "num_base_bdevs_discovered": 0, 00:10:35.020 "num_base_bdevs_operational": 3, 00:10:35.020 "base_bdevs_list": [ 00:10:35.020 { 00:10:35.020 "name": "BaseBdev1", 00:10:35.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.020 "is_configured": false, 00:10:35.020 "data_offset": 0, 00:10:35.020 "data_size": 0 00:10:35.020 }, 00:10:35.020 { 00:10:35.020 "name": "BaseBdev2", 00:10:35.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.020 "is_configured": false, 00:10:35.020 "data_offset": 0, 00:10:35.020 "data_size": 0 00:10:35.020 }, 00:10:35.020 { 00:10:35.020 "name": "BaseBdev3", 00:10:35.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.020 "is_configured": false, 00:10:35.020 "data_offset": 0, 00:10:35.020 "data_size": 0 00:10:35.020 } 00:10:35.020 ] 00:10:35.020 }' 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.020 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.590 [2024-11-27 09:47:36.502299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.590 [2024-11-27 09:47:36.502384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.590 [2024-11-27 09:47:36.514275] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.590 [2024-11-27 09:47:36.514365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.590 [2024-11-27 09:47:36.514415] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.590 [2024-11-27 09:47:36.514439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.590 [2024-11-27 09:47:36.514466] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:35.590 [2024-11-27 09:47:36.514490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.590 [2024-11-27 09:47:36.567530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.590 BaseBdev1 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.590 [ 00:10:35.590 { 00:10:35.590 "name": "BaseBdev1", 00:10:35.590 "aliases": [ 00:10:35.590 "e2c716db-fe29-4eee-82fd-5b1e5f591fd8" 00:10:35.590 ], 00:10:35.590 "product_name": "Malloc disk", 00:10:35.590 "block_size": 512, 00:10:35.590 "num_blocks": 65536, 00:10:35.590 "uuid": "e2c716db-fe29-4eee-82fd-5b1e5f591fd8", 00:10:35.590 "assigned_rate_limits": { 00:10:35.590 "rw_ios_per_sec": 0, 00:10:35.590 "rw_mbytes_per_sec": 0, 00:10:35.590 "r_mbytes_per_sec": 0, 00:10:35.590 "w_mbytes_per_sec": 0 00:10:35.590 }, 
00:10:35.590 "claimed": true, 00:10:35.590 "claim_type": "exclusive_write", 00:10:35.590 "zoned": false, 00:10:35.590 "supported_io_types": { 00:10:35.590 "read": true, 00:10:35.590 "write": true, 00:10:35.590 "unmap": true, 00:10:35.590 "flush": true, 00:10:35.590 "reset": true, 00:10:35.590 "nvme_admin": false, 00:10:35.590 "nvme_io": false, 00:10:35.590 "nvme_io_md": false, 00:10:35.590 "write_zeroes": true, 00:10:35.590 "zcopy": true, 00:10:35.590 "get_zone_info": false, 00:10:35.590 "zone_management": false, 00:10:35.590 "zone_append": false, 00:10:35.590 "compare": false, 00:10:35.590 "compare_and_write": false, 00:10:35.590 "abort": true, 00:10:35.590 "seek_hole": false, 00:10:35.590 "seek_data": false, 00:10:35.590 "copy": true, 00:10:35.590 "nvme_iov_md": false 00:10:35.590 }, 00:10:35.590 "memory_domains": [ 00:10:35.590 { 00:10:35.590 "dma_device_id": "system", 00:10:35.590 "dma_device_type": 1 00:10:35.590 }, 00:10:35.590 { 00:10:35.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.590 "dma_device_type": 2 00:10:35.590 } 00:10:35.590 ], 00:10:35.590 "driver_specific": {} 00:10:35.590 } 00:10:35.590 ] 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.590 09:47:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.590 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.590 "name": "Existed_Raid", 00:10:35.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.590 "strip_size_kb": 64, 00:10:35.590 "state": "configuring", 00:10:35.590 "raid_level": "concat", 00:10:35.590 "superblock": false, 00:10:35.590 "num_base_bdevs": 3, 00:10:35.590 "num_base_bdevs_discovered": 1, 00:10:35.590 "num_base_bdevs_operational": 3, 00:10:35.591 "base_bdevs_list": [ 00:10:35.591 { 00:10:35.591 "name": "BaseBdev1", 00:10:35.591 "uuid": "e2c716db-fe29-4eee-82fd-5b1e5f591fd8", 00:10:35.591 "is_configured": true, 00:10:35.591 "data_offset": 0, 00:10:35.591 "data_size": 65536 00:10:35.591 }, 00:10:35.591 { 00:10:35.591 "name": "BaseBdev2", 00:10:35.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.591 "is_configured": false, 00:10:35.591 
"data_offset": 0, 00:10:35.591 "data_size": 0 00:10:35.591 }, 00:10:35.591 { 00:10:35.591 "name": "BaseBdev3", 00:10:35.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.591 "is_configured": false, 00:10:35.591 "data_offset": 0, 00:10:35.591 "data_size": 0 00:10:35.591 } 00:10:35.591 ] 00:10:35.591 }' 00:10:35.591 09:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.591 09:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.159 [2024-11-27 09:47:37.030786] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:36.159 [2024-11-27 09:47:37.030902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.159 [2024-11-27 09:47:37.042824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.159 [2024-11-27 09:47:37.045203] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.159 [2024-11-27 09:47:37.045292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:10:36.159 [2024-11-27 09:47:37.045324] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:36.159 [2024-11-27 09:47:37.045347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.159 "name": "Existed_Raid", 00:10:36.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.159 "strip_size_kb": 64, 00:10:36.159 "state": "configuring", 00:10:36.159 "raid_level": "concat", 00:10:36.159 "superblock": false, 00:10:36.159 "num_base_bdevs": 3, 00:10:36.159 "num_base_bdevs_discovered": 1, 00:10:36.159 "num_base_bdevs_operational": 3, 00:10:36.159 "base_bdevs_list": [ 00:10:36.159 { 00:10:36.159 "name": "BaseBdev1", 00:10:36.159 "uuid": "e2c716db-fe29-4eee-82fd-5b1e5f591fd8", 00:10:36.159 "is_configured": true, 00:10:36.159 "data_offset": 0, 00:10:36.159 "data_size": 65536 00:10:36.159 }, 00:10:36.159 { 00:10:36.159 "name": "BaseBdev2", 00:10:36.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.159 "is_configured": false, 00:10:36.159 "data_offset": 0, 00:10:36.159 "data_size": 0 00:10:36.159 }, 00:10:36.159 { 00:10:36.159 "name": "BaseBdev3", 00:10:36.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.159 "is_configured": false, 00:10:36.159 "data_offset": 0, 00:10:36.159 "data_size": 0 00:10:36.159 } 00:10:36.159 ] 00:10:36.159 }' 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.159 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.418 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:36.418 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:36.418 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.418 [2024-11-27 09:47:37.485623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.418 BaseBdev2 00:10:36.418 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.418 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.419 [ 00:10:36.419 { 00:10:36.419 "name": "BaseBdev2", 00:10:36.419 "aliases": [ 00:10:36.419 "6ef50cb5-e9c3-4c93-ae9f-093524d00ee2" 00:10:36.419 ], 00:10:36.419 
"product_name": "Malloc disk", 00:10:36.419 "block_size": 512, 00:10:36.419 "num_blocks": 65536, 00:10:36.419 "uuid": "6ef50cb5-e9c3-4c93-ae9f-093524d00ee2", 00:10:36.419 "assigned_rate_limits": { 00:10:36.419 "rw_ios_per_sec": 0, 00:10:36.419 "rw_mbytes_per_sec": 0, 00:10:36.419 "r_mbytes_per_sec": 0, 00:10:36.419 "w_mbytes_per_sec": 0 00:10:36.419 }, 00:10:36.419 "claimed": true, 00:10:36.419 "claim_type": "exclusive_write", 00:10:36.419 "zoned": false, 00:10:36.419 "supported_io_types": { 00:10:36.419 "read": true, 00:10:36.419 "write": true, 00:10:36.419 "unmap": true, 00:10:36.419 "flush": true, 00:10:36.419 "reset": true, 00:10:36.419 "nvme_admin": false, 00:10:36.419 "nvme_io": false, 00:10:36.419 "nvme_io_md": false, 00:10:36.419 "write_zeroes": true, 00:10:36.419 "zcopy": true, 00:10:36.419 "get_zone_info": false, 00:10:36.419 "zone_management": false, 00:10:36.419 "zone_append": false, 00:10:36.419 "compare": false, 00:10:36.419 "compare_and_write": false, 00:10:36.419 "abort": true, 00:10:36.419 "seek_hole": false, 00:10:36.419 "seek_data": false, 00:10:36.419 "copy": true, 00:10:36.419 "nvme_iov_md": false 00:10:36.419 }, 00:10:36.419 "memory_domains": [ 00:10:36.419 { 00:10:36.419 "dma_device_id": "system", 00:10:36.419 "dma_device_type": 1 00:10:36.419 }, 00:10:36.419 { 00:10:36.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.419 "dma_device_type": 2 00:10:36.419 } 00:10:36.419 ], 00:10:36.419 "driver_specific": {} 00:10:36.419 } 00:10:36.419 ] 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.419 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.718 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.718 "name": "Existed_Raid", 00:10:36.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.718 "strip_size_kb": 64, 00:10:36.718 "state": "configuring", 00:10:36.718 "raid_level": "concat", 00:10:36.718 "superblock": false, 
00:10:36.718 "num_base_bdevs": 3, 00:10:36.718 "num_base_bdevs_discovered": 2, 00:10:36.718 "num_base_bdevs_operational": 3, 00:10:36.718 "base_bdevs_list": [ 00:10:36.718 { 00:10:36.718 "name": "BaseBdev1", 00:10:36.718 "uuid": "e2c716db-fe29-4eee-82fd-5b1e5f591fd8", 00:10:36.718 "is_configured": true, 00:10:36.718 "data_offset": 0, 00:10:36.718 "data_size": 65536 00:10:36.718 }, 00:10:36.718 { 00:10:36.718 "name": "BaseBdev2", 00:10:36.718 "uuid": "6ef50cb5-e9c3-4c93-ae9f-093524d00ee2", 00:10:36.718 "is_configured": true, 00:10:36.718 "data_offset": 0, 00:10:36.718 "data_size": 65536 00:10:36.718 }, 00:10:36.718 { 00:10:36.718 "name": "BaseBdev3", 00:10:36.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.718 "is_configured": false, 00:10:36.718 "data_offset": 0, 00:10:36.718 "data_size": 0 00:10:36.719 } 00:10:36.719 ] 00:10:36.719 }' 00:10:36.719 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.719 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.989 09:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:36.989 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.989 09:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.989 [2024-11-27 09:47:38.017705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.989 [2024-11-27 09:47:38.017874] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:36.989 [2024-11-27 09:47:38.017907] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:36.989 [2024-11-27 09:47:38.018278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:36.989 [2024-11-27 09:47:38.018531] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000007e80 00:10:36.989 [2024-11-27 09:47:38.018575] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:36.989 [2024-11-27 09:47:38.018943] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.989 BaseBdev3 00:10:36.989 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.989 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:36.989 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:36.989 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.989 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.989 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.989 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.989 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.989 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.989 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.989 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.989 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:36.989 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.989 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.989 [ 00:10:36.989 { 00:10:36.989 "name": "BaseBdev3", 00:10:36.989 "aliases": [ 
00:10:36.989 "662295ff-1c2a-4a49-bbaf-c36f601ccf75" 00:10:36.989 ], 00:10:36.989 "product_name": "Malloc disk", 00:10:36.989 "block_size": 512, 00:10:36.989 "num_blocks": 65536, 00:10:36.989 "uuid": "662295ff-1c2a-4a49-bbaf-c36f601ccf75", 00:10:36.989 "assigned_rate_limits": { 00:10:36.989 "rw_ios_per_sec": 0, 00:10:36.989 "rw_mbytes_per_sec": 0, 00:10:36.989 "r_mbytes_per_sec": 0, 00:10:36.989 "w_mbytes_per_sec": 0 00:10:36.989 }, 00:10:36.989 "claimed": true, 00:10:36.989 "claim_type": "exclusive_write", 00:10:36.989 "zoned": false, 00:10:36.989 "supported_io_types": { 00:10:36.989 "read": true, 00:10:36.989 "write": true, 00:10:36.989 "unmap": true, 00:10:36.989 "flush": true, 00:10:36.989 "reset": true, 00:10:36.989 "nvme_admin": false, 00:10:36.989 "nvme_io": false, 00:10:36.989 "nvme_io_md": false, 00:10:36.989 "write_zeroes": true, 00:10:36.989 "zcopy": true, 00:10:36.989 "get_zone_info": false, 00:10:36.989 "zone_management": false, 00:10:36.989 "zone_append": false, 00:10:36.989 "compare": false, 00:10:36.989 "compare_and_write": false, 00:10:36.989 "abort": true, 00:10:36.989 "seek_hole": false, 00:10:36.989 "seek_data": false, 00:10:36.989 "copy": true, 00:10:36.989 "nvme_iov_md": false 00:10:36.989 }, 00:10:36.989 "memory_domains": [ 00:10:36.989 { 00:10:36.989 "dma_device_id": "system", 00:10:36.989 "dma_device_type": 1 00:10:36.989 }, 00:10:36.989 { 00:10:36.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.989 "dma_device_type": 2 00:10:36.989 } 00:10:36.989 ], 00:10:36.989 "driver_specific": {} 00:10:36.989 } 00:10:36.989 ] 00:10:36.989 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.989 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.989 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:36.990 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:10:36.990 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:36.990 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.990 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.990 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.990 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.990 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.990 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.990 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.990 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.990 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.990 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.990 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.990 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.990 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.990 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.990 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.990 "name": "Existed_Raid", 00:10:36.990 "uuid": "17c4946f-17e9-4747-9261-fae7acd0eb3c", 00:10:36.990 "strip_size_kb": 64, 00:10:36.990 "state": "online", 
00:10:36.990 "raid_level": "concat", 00:10:36.990 "superblock": false, 00:10:36.990 "num_base_bdevs": 3, 00:10:36.990 "num_base_bdevs_discovered": 3, 00:10:36.990 "num_base_bdevs_operational": 3, 00:10:36.990 "base_bdevs_list": [ 00:10:36.990 { 00:10:36.990 "name": "BaseBdev1", 00:10:36.990 "uuid": "e2c716db-fe29-4eee-82fd-5b1e5f591fd8", 00:10:36.990 "is_configured": true, 00:10:36.990 "data_offset": 0, 00:10:36.990 "data_size": 65536 00:10:36.990 }, 00:10:36.990 { 00:10:36.990 "name": "BaseBdev2", 00:10:36.990 "uuid": "6ef50cb5-e9c3-4c93-ae9f-093524d00ee2", 00:10:36.990 "is_configured": true, 00:10:36.990 "data_offset": 0, 00:10:36.990 "data_size": 65536 00:10:36.990 }, 00:10:36.990 { 00:10:36.990 "name": "BaseBdev3", 00:10:36.990 "uuid": "662295ff-1c2a-4a49-bbaf-c36f601ccf75", 00:10:36.990 "is_configured": true, 00:10:36.990 "data_offset": 0, 00:10:36.990 "data_size": 65536 00:10:36.990 } 00:10:36.990 ] 00:10:36.990 }' 00:10:36.990 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.990 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.560 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:37.560 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:37.560 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:37.560 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:37.560 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:37.560 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:37.560 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:37.560 09:47:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:37.560 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.560 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.560 [2024-11-27 09:47:38.497323] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.560 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.560 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:37.560 "name": "Existed_Raid", 00:10:37.560 "aliases": [ 00:10:37.560 "17c4946f-17e9-4747-9261-fae7acd0eb3c" 00:10:37.560 ], 00:10:37.560 "product_name": "Raid Volume", 00:10:37.560 "block_size": 512, 00:10:37.560 "num_blocks": 196608, 00:10:37.560 "uuid": "17c4946f-17e9-4747-9261-fae7acd0eb3c", 00:10:37.560 "assigned_rate_limits": { 00:10:37.560 "rw_ios_per_sec": 0, 00:10:37.560 "rw_mbytes_per_sec": 0, 00:10:37.560 "r_mbytes_per_sec": 0, 00:10:37.560 "w_mbytes_per_sec": 0 00:10:37.560 }, 00:10:37.560 "claimed": false, 00:10:37.560 "zoned": false, 00:10:37.560 "supported_io_types": { 00:10:37.560 "read": true, 00:10:37.560 "write": true, 00:10:37.560 "unmap": true, 00:10:37.560 "flush": true, 00:10:37.560 "reset": true, 00:10:37.560 "nvme_admin": false, 00:10:37.560 "nvme_io": false, 00:10:37.560 "nvme_io_md": false, 00:10:37.560 "write_zeroes": true, 00:10:37.560 "zcopy": false, 00:10:37.560 "get_zone_info": false, 00:10:37.560 "zone_management": false, 00:10:37.560 "zone_append": false, 00:10:37.560 "compare": false, 00:10:37.560 "compare_and_write": false, 00:10:37.560 "abort": false, 00:10:37.560 "seek_hole": false, 00:10:37.560 "seek_data": false, 00:10:37.560 "copy": false, 00:10:37.560 "nvme_iov_md": false 00:10:37.560 }, 00:10:37.560 "memory_domains": [ 00:10:37.560 { 00:10:37.560 "dma_device_id": "system", 00:10:37.560 "dma_device_type": 1 
00:10:37.560 }, 00:10:37.560 { 00:10:37.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.560 "dma_device_type": 2 00:10:37.560 }, 00:10:37.560 { 00:10:37.560 "dma_device_id": "system", 00:10:37.560 "dma_device_type": 1 00:10:37.560 }, 00:10:37.560 { 00:10:37.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.560 "dma_device_type": 2 00:10:37.560 }, 00:10:37.560 { 00:10:37.560 "dma_device_id": "system", 00:10:37.560 "dma_device_type": 1 00:10:37.560 }, 00:10:37.560 { 00:10:37.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.560 "dma_device_type": 2 00:10:37.560 } 00:10:37.560 ], 00:10:37.560 "driver_specific": { 00:10:37.560 "raid": { 00:10:37.560 "uuid": "17c4946f-17e9-4747-9261-fae7acd0eb3c", 00:10:37.560 "strip_size_kb": 64, 00:10:37.560 "state": "online", 00:10:37.560 "raid_level": "concat", 00:10:37.560 "superblock": false, 00:10:37.560 "num_base_bdevs": 3, 00:10:37.560 "num_base_bdevs_discovered": 3, 00:10:37.560 "num_base_bdevs_operational": 3, 00:10:37.560 "base_bdevs_list": [ 00:10:37.560 { 00:10:37.560 "name": "BaseBdev1", 00:10:37.560 "uuid": "e2c716db-fe29-4eee-82fd-5b1e5f591fd8", 00:10:37.560 "is_configured": true, 00:10:37.560 "data_offset": 0, 00:10:37.560 "data_size": 65536 00:10:37.560 }, 00:10:37.560 { 00:10:37.560 "name": "BaseBdev2", 00:10:37.560 "uuid": "6ef50cb5-e9c3-4c93-ae9f-093524d00ee2", 00:10:37.560 "is_configured": true, 00:10:37.560 "data_offset": 0, 00:10:37.560 "data_size": 65536 00:10:37.560 }, 00:10:37.560 { 00:10:37.560 "name": "BaseBdev3", 00:10:37.560 "uuid": "662295ff-1c2a-4a49-bbaf-c36f601ccf75", 00:10:37.560 "is_configured": true, 00:10:37.560 "data_offset": 0, 00:10:37.560 "data_size": 65536 00:10:37.560 } 00:10:37.560 ] 00:10:37.560 } 00:10:37.560 } 00:10:37.560 }' 00:10:37.560 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:37.561 09:47:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:37.561 BaseBdev2 00:10:37.561 BaseBdev3' 00:10:37.561 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.561 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:37.561 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.561 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.561 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:37.561 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.561 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.561 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.561 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.561 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.561 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.561 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:37.561 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.561 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.561 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.561 09:47:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.820 [2024-11-27 09:47:38.740555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:37.820 [2024-11-27 09:47:38.740630] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.820 [2024-11-27 09:47:38.740715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.820 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.820 "name": "Existed_Raid", 00:10:37.820 "uuid": "17c4946f-17e9-4747-9261-fae7acd0eb3c", 00:10:37.820 "strip_size_kb": 64, 00:10:37.820 "state": "offline", 00:10:37.820 "raid_level": "concat", 00:10:37.820 "superblock": false, 00:10:37.820 "num_base_bdevs": 3, 00:10:37.820 "num_base_bdevs_discovered": 2, 00:10:37.821 "num_base_bdevs_operational": 2, 00:10:37.821 "base_bdevs_list": [ 00:10:37.821 { 00:10:37.821 "name": null, 00:10:37.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.821 "is_configured": false, 00:10:37.821 "data_offset": 0, 00:10:37.821 "data_size": 65536 00:10:37.821 }, 00:10:37.821 { 00:10:37.821 "name": "BaseBdev2", 00:10:37.821 "uuid": "6ef50cb5-e9c3-4c93-ae9f-093524d00ee2", 00:10:37.821 "is_configured": true, 00:10:37.821 "data_offset": 0, 00:10:37.821 "data_size": 65536 00:10:37.821 }, 00:10:37.821 { 00:10:37.821 "name": "BaseBdev3", 00:10:37.821 "uuid": "662295ff-1c2a-4a49-bbaf-c36f601ccf75", 00:10:37.821 "is_configured": true, 00:10:37.821 "data_offset": 0, 00:10:37.821 "data_size": 65536 00:10:37.821 } 00:10:37.821 ] 00:10:37.821 }' 00:10:37.821 09:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.821 09:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:38.389 
09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.389 [2024-11-27 09:47:39.315717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.389 09:47:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.389 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.389 [2024-11-27 09:47:39.470784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:38.389 [2024-11-27 09:47:39.470895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:38.648 
09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.648 BaseBdev2 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.648 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.648 [ 00:10:38.648 { 00:10:38.648 "name": "BaseBdev2", 00:10:38.648 "aliases": [ 00:10:38.648 "00fe6274-45f2-409a-9aa0-2f9574698f0a" 00:10:38.648 ], 00:10:38.648 "product_name": "Malloc disk", 00:10:38.648 "block_size": 512, 00:10:38.648 "num_blocks": 65536, 00:10:38.648 "uuid": "00fe6274-45f2-409a-9aa0-2f9574698f0a", 00:10:38.648 "assigned_rate_limits": { 00:10:38.648 "rw_ios_per_sec": 0, 00:10:38.648 "rw_mbytes_per_sec": 0, 00:10:38.648 "r_mbytes_per_sec": 0, 00:10:38.648 "w_mbytes_per_sec": 0 00:10:38.649 }, 00:10:38.649 "claimed": false, 00:10:38.649 "zoned": false, 00:10:38.649 "supported_io_types": { 00:10:38.649 "read": true, 00:10:38.649 "write": true, 00:10:38.649 "unmap": true, 00:10:38.649 "flush": true, 00:10:38.649 "reset": true, 00:10:38.649 "nvme_admin": false, 00:10:38.649 "nvme_io": false, 00:10:38.649 "nvme_io_md": false, 00:10:38.649 "write_zeroes": true, 00:10:38.649 "zcopy": true, 00:10:38.649 "get_zone_info": false, 00:10:38.649 "zone_management": false, 00:10:38.649 "zone_append": false, 00:10:38.649 "compare": false, 00:10:38.649 "compare_and_write": false, 00:10:38.649 "abort": true, 00:10:38.649 "seek_hole": false, 00:10:38.649 "seek_data": false, 00:10:38.649 "copy": true, 00:10:38.649 "nvme_iov_md": false 00:10:38.649 }, 00:10:38.649 "memory_domains": [ 00:10:38.649 { 00:10:38.649 "dma_device_id": "system", 00:10:38.649 "dma_device_type": 1 00:10:38.649 }, 00:10:38.649 { 00:10:38.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.649 "dma_device_type": 2 00:10:38.649 } 00:10:38.649 ], 00:10:38.649 "driver_specific": {} 00:10:38.649 } 00:10:38.649 ] 00:10:38.649 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.649 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:38.649 
09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:38.649 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.649 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:38.649 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.649 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.649 BaseBdev3 00:10:38.649 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.649 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:38.649 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:38.649 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.649 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:38.649 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.649 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.649 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.649 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.649 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.649 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.649 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:38.649 09:47:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.649 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.910 [ 00:10:38.910 { 00:10:38.910 "name": "BaseBdev3", 00:10:38.910 "aliases": [ 00:10:38.910 "3adc3c96-a184-4a27-98a2-918a43de908d" 00:10:38.910 ], 00:10:38.910 "product_name": "Malloc disk", 00:10:38.910 "block_size": 512, 00:10:38.910 "num_blocks": 65536, 00:10:38.910 "uuid": "3adc3c96-a184-4a27-98a2-918a43de908d", 00:10:38.910 "assigned_rate_limits": { 00:10:38.910 "rw_ios_per_sec": 0, 00:10:38.910 "rw_mbytes_per_sec": 0, 00:10:38.910 "r_mbytes_per_sec": 0, 00:10:38.910 "w_mbytes_per_sec": 0 00:10:38.910 }, 00:10:38.910 "claimed": false, 00:10:38.910 "zoned": false, 00:10:38.910 "supported_io_types": { 00:10:38.910 "read": true, 00:10:38.910 "write": true, 00:10:38.910 "unmap": true, 00:10:38.910 "flush": true, 00:10:38.910 "reset": true, 00:10:38.910 "nvme_admin": false, 00:10:38.910 "nvme_io": false, 00:10:38.910 "nvme_io_md": false, 00:10:38.910 "write_zeroes": true, 00:10:38.910 "zcopy": true, 00:10:38.910 "get_zone_info": false, 00:10:38.910 "zone_management": false, 00:10:38.910 "zone_append": false, 00:10:38.910 "compare": false, 00:10:38.910 "compare_and_write": false, 00:10:38.910 "abort": true, 00:10:38.910 "seek_hole": false, 00:10:38.910 "seek_data": false, 00:10:38.910 "copy": true, 00:10:38.910 "nvme_iov_md": false 00:10:38.910 }, 00:10:38.910 "memory_domains": [ 00:10:38.910 { 00:10:38.910 "dma_device_id": "system", 00:10:38.910 "dma_device_type": 1 00:10:38.910 }, 00:10:38.910 { 00:10:38.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.910 "dma_device_type": 2 00:10:38.910 } 00:10:38.910 ], 00:10:38.910 "driver_specific": {} 00:10:38.910 } 00:10:38.910 ] 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:38.910 
09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.910 [2024-11-27 09:47:39.803373] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.910 [2024-11-27 09:47:39.803479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.910 [2024-11-27 09:47:39.803525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.910 [2024-11-27 09:47:39.805614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.910 "name": "Existed_Raid", 00:10:38.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.910 "strip_size_kb": 64, 00:10:38.910 "state": "configuring", 00:10:38.910 "raid_level": "concat", 00:10:38.910 "superblock": false, 00:10:38.910 "num_base_bdevs": 3, 00:10:38.910 "num_base_bdevs_discovered": 2, 00:10:38.910 "num_base_bdevs_operational": 3, 00:10:38.910 "base_bdevs_list": [ 00:10:38.910 { 00:10:38.910 "name": "BaseBdev1", 00:10:38.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.910 "is_configured": false, 00:10:38.910 "data_offset": 0, 00:10:38.910 "data_size": 0 00:10:38.910 }, 00:10:38.910 { 00:10:38.910 "name": "BaseBdev2", 00:10:38.910 "uuid": "00fe6274-45f2-409a-9aa0-2f9574698f0a", 00:10:38.910 "is_configured": true, 00:10:38.910 "data_offset": 0, 00:10:38.910 "data_size": 65536 00:10:38.910 }, 00:10:38.910 { 00:10:38.910 "name": "BaseBdev3", 00:10:38.910 "uuid": 
"3adc3c96-a184-4a27-98a2-918a43de908d", 00:10:38.910 "is_configured": true, 00:10:38.910 "data_offset": 0, 00:10:38.910 "data_size": 65536 00:10:38.910 } 00:10:38.910 ] 00:10:38.910 }' 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.910 09:47:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.170 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:39.170 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.170 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.171 [2024-11-27 09:47:40.166788] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:39.171 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.171 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:39.171 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.171 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.171 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.171 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.171 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.171 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.171 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.171 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:39.171 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.171 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.171 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.171 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.171 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.171 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.171 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.171 "name": "Existed_Raid", 00:10:39.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.171 "strip_size_kb": 64, 00:10:39.171 "state": "configuring", 00:10:39.171 "raid_level": "concat", 00:10:39.171 "superblock": false, 00:10:39.171 "num_base_bdevs": 3, 00:10:39.171 "num_base_bdevs_discovered": 1, 00:10:39.171 "num_base_bdevs_operational": 3, 00:10:39.171 "base_bdevs_list": [ 00:10:39.171 { 00:10:39.171 "name": "BaseBdev1", 00:10:39.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.171 "is_configured": false, 00:10:39.171 "data_offset": 0, 00:10:39.171 "data_size": 0 00:10:39.171 }, 00:10:39.171 { 00:10:39.171 "name": null, 00:10:39.171 "uuid": "00fe6274-45f2-409a-9aa0-2f9574698f0a", 00:10:39.171 "is_configured": false, 00:10:39.171 "data_offset": 0, 00:10:39.171 "data_size": 65536 00:10:39.171 }, 00:10:39.171 { 00:10:39.171 "name": "BaseBdev3", 00:10:39.171 "uuid": "3adc3c96-a184-4a27-98a2-918a43de908d", 00:10:39.171 "is_configured": true, 00:10:39.171 "data_offset": 0, 00:10:39.171 "data_size": 65536 00:10:39.171 } 00:10:39.171 ] 00:10:39.171 }' 00:10:39.171 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:39.171 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.742 [2024-11-27 09:47:40.692339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.742 BaseBdev1 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.742 [ 00:10:39.742 { 00:10:39.742 "name": "BaseBdev1", 00:10:39.742 "aliases": [ 00:10:39.742 "946dacf4-81ea-4670-a8f8-daf8ea82163d" 00:10:39.742 ], 00:10:39.742 "product_name": "Malloc disk", 00:10:39.742 "block_size": 512, 00:10:39.742 "num_blocks": 65536, 00:10:39.742 "uuid": "946dacf4-81ea-4670-a8f8-daf8ea82163d", 00:10:39.742 "assigned_rate_limits": { 00:10:39.742 "rw_ios_per_sec": 0, 00:10:39.742 "rw_mbytes_per_sec": 0, 00:10:39.742 "r_mbytes_per_sec": 0, 00:10:39.742 "w_mbytes_per_sec": 0 00:10:39.742 }, 00:10:39.742 "claimed": true, 00:10:39.742 "claim_type": "exclusive_write", 00:10:39.742 "zoned": false, 00:10:39.742 "supported_io_types": { 00:10:39.742 "read": true, 00:10:39.742 "write": true, 00:10:39.742 "unmap": true, 00:10:39.742 "flush": true, 00:10:39.742 "reset": true, 00:10:39.742 "nvme_admin": false, 00:10:39.742 "nvme_io": false, 00:10:39.742 "nvme_io_md": false, 00:10:39.742 "write_zeroes": true, 00:10:39.742 "zcopy": true, 00:10:39.742 "get_zone_info": false, 00:10:39.742 "zone_management": false, 00:10:39.742 "zone_append": false, 00:10:39.742 "compare": false, 00:10:39.742 "compare_and_write": false, 
00:10:39.742 "abort": true, 00:10:39.742 "seek_hole": false, 00:10:39.742 "seek_data": false, 00:10:39.742 "copy": true, 00:10:39.742 "nvme_iov_md": false 00:10:39.742 }, 00:10:39.742 "memory_domains": [ 00:10:39.742 { 00:10:39.742 "dma_device_id": "system", 00:10:39.742 "dma_device_type": 1 00:10:39.742 }, 00:10:39.742 { 00:10:39.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.742 "dma_device_type": 2 00:10:39.742 } 00:10:39.742 ], 00:10:39.742 "driver_specific": {} 00:10:39.742 } 00:10:39.742 ] 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.742 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.742 "name": "Existed_Raid", 00:10:39.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.742 "strip_size_kb": 64, 00:10:39.742 "state": "configuring", 00:10:39.742 "raid_level": "concat", 00:10:39.742 "superblock": false, 00:10:39.742 "num_base_bdevs": 3, 00:10:39.742 "num_base_bdevs_discovered": 2, 00:10:39.742 "num_base_bdevs_operational": 3, 00:10:39.743 "base_bdevs_list": [ 00:10:39.743 { 00:10:39.743 "name": "BaseBdev1", 00:10:39.743 "uuid": "946dacf4-81ea-4670-a8f8-daf8ea82163d", 00:10:39.743 "is_configured": true, 00:10:39.743 "data_offset": 0, 00:10:39.743 "data_size": 65536 00:10:39.743 }, 00:10:39.743 { 00:10:39.743 "name": null, 00:10:39.743 "uuid": "00fe6274-45f2-409a-9aa0-2f9574698f0a", 00:10:39.743 "is_configured": false, 00:10:39.743 "data_offset": 0, 00:10:39.743 "data_size": 65536 00:10:39.743 }, 00:10:39.743 { 00:10:39.743 "name": "BaseBdev3", 00:10:39.743 "uuid": "3adc3c96-a184-4a27-98a2-918a43de908d", 00:10:39.743 "is_configured": true, 00:10:39.743 "data_offset": 0, 00:10:39.743 "data_size": 65536 00:10:39.743 } 00:10:39.743 ] 00:10:39.743 }' 00:10:39.743 09:47:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.743 09:47:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.313 09:47:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.313 [2024-11-27 09:47:41.211511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.313 "name": "Existed_Raid", 00:10:40.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.313 "strip_size_kb": 64, 00:10:40.313 "state": "configuring", 00:10:40.313 "raid_level": "concat", 00:10:40.313 "superblock": false, 00:10:40.313 "num_base_bdevs": 3, 00:10:40.313 "num_base_bdevs_discovered": 1, 00:10:40.313 "num_base_bdevs_operational": 3, 00:10:40.313 "base_bdevs_list": [ 00:10:40.313 { 00:10:40.313 "name": "BaseBdev1", 00:10:40.313 "uuid": "946dacf4-81ea-4670-a8f8-daf8ea82163d", 00:10:40.313 "is_configured": true, 00:10:40.313 "data_offset": 0, 00:10:40.313 "data_size": 65536 00:10:40.313 }, 00:10:40.313 { 00:10:40.313 "name": null, 00:10:40.313 "uuid": "00fe6274-45f2-409a-9aa0-2f9574698f0a", 00:10:40.313 "is_configured": false, 00:10:40.313 "data_offset": 0, 00:10:40.313 "data_size": 65536 00:10:40.313 }, 00:10:40.313 { 00:10:40.313 "name": null, 00:10:40.313 "uuid": "3adc3c96-a184-4a27-98a2-918a43de908d", 00:10:40.313 "is_configured": false, 00:10:40.313 "data_offset": 0, 00:10:40.313 "data_size": 65536 00:10:40.313 
} 00:10:40.313 ] 00:10:40.313 }' 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.313 09:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.574 [2024-11-27 09:47:41.646754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.574 "name": "Existed_Raid", 00:10:40.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.574 "strip_size_kb": 64, 00:10:40.574 "state": "configuring", 00:10:40.574 "raid_level": "concat", 00:10:40.574 "superblock": false, 00:10:40.574 "num_base_bdevs": 3, 00:10:40.574 "num_base_bdevs_discovered": 2, 00:10:40.574 "num_base_bdevs_operational": 3, 00:10:40.574 "base_bdevs_list": [ 00:10:40.574 { 00:10:40.574 "name": "BaseBdev1", 00:10:40.574 "uuid": "946dacf4-81ea-4670-a8f8-daf8ea82163d", 00:10:40.574 "is_configured": true, 00:10:40.574 "data_offset": 0, 00:10:40.574 "data_size": 65536 00:10:40.574 }, 00:10:40.574 { 
00:10:40.574 "name": null, 00:10:40.574 "uuid": "00fe6274-45f2-409a-9aa0-2f9574698f0a", 00:10:40.574 "is_configured": false, 00:10:40.574 "data_offset": 0, 00:10:40.574 "data_size": 65536 00:10:40.574 }, 00:10:40.574 { 00:10:40.574 "name": "BaseBdev3", 00:10:40.574 "uuid": "3adc3c96-a184-4a27-98a2-918a43de908d", 00:10:40.574 "is_configured": true, 00:10:40.574 "data_offset": 0, 00:10:40.574 "data_size": 65536 00:10:40.574 } 00:10:40.574 ] 00:10:40.574 }' 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.574 09:47:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.144 [2024-11-27 09:47:42.133946] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 3 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.144 09:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.404 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.404 "name": "Existed_Raid", 00:10:41.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.404 "strip_size_kb": 64, 00:10:41.404 "state": "configuring", 00:10:41.404 "raid_level": "concat", 00:10:41.404 "superblock": false, 00:10:41.404 "num_base_bdevs": 3, 
00:10:41.404 "num_base_bdevs_discovered": 1, 00:10:41.404 "num_base_bdevs_operational": 3, 00:10:41.404 "base_bdevs_list": [ 00:10:41.404 { 00:10:41.404 "name": null, 00:10:41.404 "uuid": "946dacf4-81ea-4670-a8f8-daf8ea82163d", 00:10:41.404 "is_configured": false, 00:10:41.404 "data_offset": 0, 00:10:41.404 "data_size": 65536 00:10:41.404 }, 00:10:41.404 { 00:10:41.404 "name": null, 00:10:41.404 "uuid": "00fe6274-45f2-409a-9aa0-2f9574698f0a", 00:10:41.404 "is_configured": false, 00:10:41.404 "data_offset": 0, 00:10:41.404 "data_size": 65536 00:10:41.404 }, 00:10:41.404 { 00:10:41.404 "name": "BaseBdev3", 00:10:41.404 "uuid": "3adc3c96-a184-4a27-98a2-918a43de908d", 00:10:41.404 "is_configured": true, 00:10:41.404 "data_offset": 0, 00:10:41.404 "data_size": 65536 00:10:41.404 } 00:10:41.404 ] 00:10:41.404 }' 00:10:41.404 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.404 09:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.664 09:47:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.664 [2024-11-27 09:47:42.694906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.664 "name": "Existed_Raid", 00:10:41.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.664 "strip_size_kb": 64, 00:10:41.664 "state": "configuring", 00:10:41.664 "raid_level": "concat", 00:10:41.664 "superblock": false, 00:10:41.664 "num_base_bdevs": 3, 00:10:41.664 "num_base_bdevs_discovered": 2, 00:10:41.664 "num_base_bdevs_operational": 3, 00:10:41.664 "base_bdevs_list": [ 00:10:41.664 { 00:10:41.664 "name": null, 00:10:41.664 "uuid": "946dacf4-81ea-4670-a8f8-daf8ea82163d", 00:10:41.664 "is_configured": false, 00:10:41.664 "data_offset": 0, 00:10:41.664 "data_size": 65536 00:10:41.664 }, 00:10:41.664 { 00:10:41.664 "name": "BaseBdev2", 00:10:41.664 "uuid": "00fe6274-45f2-409a-9aa0-2f9574698f0a", 00:10:41.664 "is_configured": true, 00:10:41.664 "data_offset": 0, 00:10:41.664 "data_size": 65536 00:10:41.664 }, 00:10:41.664 { 00:10:41.664 "name": "BaseBdev3", 00:10:41.664 "uuid": "3adc3c96-a184-4a27-98a2-918a43de908d", 00:10:41.664 "is_configured": true, 00:10:41.664 "data_offset": 0, 00:10:41.664 "data_size": 65536 00:10:41.664 } 00:10:41.664 ] 00:10:41.664 }' 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.664 09:47:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 946dacf4-81ea-4670-a8f8-daf8ea82163d 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.234 [2024-11-27 09:47:43.272882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:42.234 [2024-11-27 09:47:43.273047] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:42.234 [2024-11-27 09:47:43.273079] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:42.234 [2024-11-27 09:47:43.273417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:42.234 [2024-11-27 09:47:43.273658] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:42.234 [2024-11-27 09:47:43.273702] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:42.234 [2024-11-27 09:47:43.274038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:10:42.234 NewBaseBdev 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.234 [ 00:10:42.234 { 00:10:42.234 "name": "NewBaseBdev", 00:10:42.234 "aliases": [ 00:10:42.234 "946dacf4-81ea-4670-a8f8-daf8ea82163d" 00:10:42.234 ], 00:10:42.234 "product_name": "Malloc disk", 00:10:42.234 "block_size": 512, 00:10:42.234 "num_blocks": 65536, 00:10:42.234 "uuid": "946dacf4-81ea-4670-a8f8-daf8ea82163d", 00:10:42.234 "assigned_rate_limits": { 
00:10:42.234 "rw_ios_per_sec": 0, 00:10:42.234 "rw_mbytes_per_sec": 0, 00:10:42.234 "r_mbytes_per_sec": 0, 00:10:42.234 "w_mbytes_per_sec": 0 00:10:42.234 }, 00:10:42.234 "claimed": true, 00:10:42.234 "claim_type": "exclusive_write", 00:10:42.234 "zoned": false, 00:10:42.234 "supported_io_types": { 00:10:42.234 "read": true, 00:10:42.234 "write": true, 00:10:42.234 "unmap": true, 00:10:42.234 "flush": true, 00:10:42.234 "reset": true, 00:10:42.234 "nvme_admin": false, 00:10:42.234 "nvme_io": false, 00:10:42.234 "nvme_io_md": false, 00:10:42.234 "write_zeroes": true, 00:10:42.234 "zcopy": true, 00:10:42.234 "get_zone_info": false, 00:10:42.234 "zone_management": false, 00:10:42.234 "zone_append": false, 00:10:42.234 "compare": false, 00:10:42.234 "compare_and_write": false, 00:10:42.234 "abort": true, 00:10:42.234 "seek_hole": false, 00:10:42.234 "seek_data": false, 00:10:42.234 "copy": true, 00:10:42.234 "nvme_iov_md": false 00:10:42.234 }, 00:10:42.234 "memory_domains": [ 00:10:42.234 { 00:10:42.234 "dma_device_id": "system", 00:10:42.234 "dma_device_type": 1 00:10:42.234 }, 00:10:42.234 { 00:10:42.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.234 "dma_device_type": 2 00:10:42.234 } 00:10:42.234 ], 00:10:42.234 "driver_specific": {} 00:10:42.234 } 00:10:42.234 ] 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.234 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.235 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.235 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.235 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.235 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.235 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.235 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.235 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.235 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.235 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.494 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.494 "name": "Existed_Raid", 00:10:42.494 "uuid": "ae7a7729-841c-4f96-8942-d810cb3b90ff", 00:10:42.494 "strip_size_kb": 64, 00:10:42.494 "state": "online", 00:10:42.494 "raid_level": "concat", 00:10:42.494 "superblock": false, 00:10:42.494 "num_base_bdevs": 3, 00:10:42.494 "num_base_bdevs_discovered": 3, 00:10:42.494 "num_base_bdevs_operational": 3, 00:10:42.494 "base_bdevs_list": [ 00:10:42.494 { 00:10:42.494 "name": "NewBaseBdev", 00:10:42.494 "uuid": "946dacf4-81ea-4670-a8f8-daf8ea82163d", 00:10:42.494 "is_configured": true, 00:10:42.494 "data_offset": 0, 00:10:42.494 "data_size": 65536 00:10:42.494 }, 00:10:42.494 { 00:10:42.494 "name": 
"BaseBdev2", 00:10:42.494 "uuid": "00fe6274-45f2-409a-9aa0-2f9574698f0a", 00:10:42.494 "is_configured": true, 00:10:42.494 "data_offset": 0, 00:10:42.494 "data_size": 65536 00:10:42.494 }, 00:10:42.494 { 00:10:42.494 "name": "BaseBdev3", 00:10:42.494 "uuid": "3adc3c96-a184-4a27-98a2-918a43de908d", 00:10:42.494 "is_configured": true, 00:10:42.494 "data_offset": 0, 00:10:42.494 "data_size": 65536 00:10:42.494 } 00:10:42.494 ] 00:10:42.494 }' 00:10:42.494 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.494 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.755 [2024-11-27 09:47:43.728494] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:42.755 "name": "Existed_Raid", 00:10:42.755 "aliases": [ 00:10:42.755 "ae7a7729-841c-4f96-8942-d810cb3b90ff" 00:10:42.755 ], 00:10:42.755 "product_name": "Raid Volume", 00:10:42.755 "block_size": 512, 00:10:42.755 "num_blocks": 196608, 00:10:42.755 "uuid": "ae7a7729-841c-4f96-8942-d810cb3b90ff", 00:10:42.755 "assigned_rate_limits": { 00:10:42.755 "rw_ios_per_sec": 0, 00:10:42.755 "rw_mbytes_per_sec": 0, 00:10:42.755 "r_mbytes_per_sec": 0, 00:10:42.755 "w_mbytes_per_sec": 0 00:10:42.755 }, 00:10:42.755 "claimed": false, 00:10:42.755 "zoned": false, 00:10:42.755 "supported_io_types": { 00:10:42.755 "read": true, 00:10:42.755 "write": true, 00:10:42.755 "unmap": true, 00:10:42.755 "flush": true, 00:10:42.755 "reset": true, 00:10:42.755 "nvme_admin": false, 00:10:42.755 "nvme_io": false, 00:10:42.755 "nvme_io_md": false, 00:10:42.755 "write_zeroes": true, 00:10:42.755 "zcopy": false, 00:10:42.755 "get_zone_info": false, 00:10:42.755 "zone_management": false, 00:10:42.755 "zone_append": false, 00:10:42.755 "compare": false, 00:10:42.755 "compare_and_write": false, 00:10:42.755 "abort": false, 00:10:42.755 "seek_hole": false, 00:10:42.755 "seek_data": false, 00:10:42.755 "copy": false, 00:10:42.755 "nvme_iov_md": false 00:10:42.755 }, 00:10:42.755 "memory_domains": [ 00:10:42.755 { 00:10:42.755 "dma_device_id": "system", 00:10:42.755 "dma_device_type": 1 00:10:42.755 }, 00:10:42.755 { 00:10:42.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.755 "dma_device_type": 2 00:10:42.755 }, 00:10:42.755 { 00:10:42.755 "dma_device_id": "system", 00:10:42.755 "dma_device_type": 1 00:10:42.755 }, 00:10:42.755 { 00:10:42.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.755 "dma_device_type": 2 00:10:42.755 }, 00:10:42.755 { 00:10:42.755 "dma_device_id": "system", 00:10:42.755 "dma_device_type": 1 00:10:42.755 }, 00:10:42.755 { 00:10:42.755 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:42.755 "dma_device_type": 2 00:10:42.755 } 00:10:42.755 ], 00:10:42.755 "driver_specific": { 00:10:42.755 "raid": { 00:10:42.755 "uuid": "ae7a7729-841c-4f96-8942-d810cb3b90ff", 00:10:42.755 "strip_size_kb": 64, 00:10:42.755 "state": "online", 00:10:42.755 "raid_level": "concat", 00:10:42.755 "superblock": false, 00:10:42.755 "num_base_bdevs": 3, 00:10:42.755 "num_base_bdevs_discovered": 3, 00:10:42.755 "num_base_bdevs_operational": 3, 00:10:42.755 "base_bdevs_list": [ 00:10:42.755 { 00:10:42.755 "name": "NewBaseBdev", 00:10:42.755 "uuid": "946dacf4-81ea-4670-a8f8-daf8ea82163d", 00:10:42.755 "is_configured": true, 00:10:42.755 "data_offset": 0, 00:10:42.755 "data_size": 65536 00:10:42.755 }, 00:10:42.755 { 00:10:42.755 "name": "BaseBdev2", 00:10:42.755 "uuid": "00fe6274-45f2-409a-9aa0-2f9574698f0a", 00:10:42.755 "is_configured": true, 00:10:42.755 "data_offset": 0, 00:10:42.755 "data_size": 65536 00:10:42.755 }, 00:10:42.755 { 00:10:42.755 "name": "BaseBdev3", 00:10:42.755 "uuid": "3adc3c96-a184-4a27-98a2-918a43de908d", 00:10:42.755 "is_configured": true, 00:10:42.755 "data_offset": 0, 00:10:42.755 "data_size": 65536 00:10:42.755 } 00:10:42.755 ] 00:10:42.755 } 00:10:42.755 } 00:10:42.755 }' 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:42.755 BaseBdev2 00:10:42.755 BaseBdev3' 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.755 09:47:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.755 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:43.015 
09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.015 [2024-11-27 09:47:43.955745] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:43.015 [2024-11-27 09:47:43.955819] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.015 [2024-11-27 09:47:43.955916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.015 [2024-11-27 09:47:43.955995] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:43.015 [2024-11-27 09:47:43.956022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65879 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 65879 ']' 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65879 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65879 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.015 09:47:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.016 killing process with pid 65879 00:10:43.016 09:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65879' 00:10:43.016 09:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65879 00:10:43.016 [2024-11-27 09:47:44.002540] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:43.016 09:47:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65879 00:10:43.275 [2024-11-27 09:47:44.333812] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:44.655 00:10:44.655 real 0m10.453s 00:10:44.655 user 0m16.267s 00:10:44.655 sys 0m1.935s 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.655 ************************************ 00:10:44.655 END TEST raid_state_function_test 00:10:44.655 ************************************ 00:10:44.655 09:47:45 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 3 true 00:10:44.655 09:47:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:44.655 09:47:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.655 09:47:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:44.655 ************************************ 00:10:44.655 START TEST raid_state_function_test_sb 00:10:44.655 ************************************ 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:44.655 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:44.656 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:44.656 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:44.656 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:44.656 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:44.656 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66495 00:10:44.656 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66495' 00:10:44.656 Process raid pid: 66495 00:10:44.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:44.656 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66495 00:10:44.656 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66495 ']' 00:10:44.656 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.656 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.656 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.656 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.656 09:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:44.656 09:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.656 [2024-11-27 09:47:45.728572] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:10:44.656 [2024-11-27 09:47:45.728832] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.915 [2024-11-27 09:47:45.908992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.175 [2024-11-27 09:47:46.051659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.175 [2024-11-27 09:47:46.296358] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.175 [2024-11-27 09:47:46.296514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.744 [2024-11-27 09:47:46.574133] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:45.744 [2024-11-27 09:47:46.574257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:45.744 [2024-11-27 09:47:46.574292] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.744 [2024-11-27 09:47:46.574318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.744 [2024-11-27 09:47:46.574337] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:10:45.744 [2024-11-27 09:47:46.574368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.744 "name": "Existed_Raid", 00:10:45.744 "uuid": "bf1a8a97-3ba3-4819-ad47-6cf247345873", 00:10:45.744 "strip_size_kb": 64, 00:10:45.744 "state": "configuring", 00:10:45.744 "raid_level": "concat", 00:10:45.744 "superblock": true, 00:10:45.744 "num_base_bdevs": 3, 00:10:45.744 "num_base_bdevs_discovered": 0, 00:10:45.744 "num_base_bdevs_operational": 3, 00:10:45.744 "base_bdevs_list": [ 00:10:45.744 { 00:10:45.744 "name": "BaseBdev1", 00:10:45.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.744 "is_configured": false, 00:10:45.744 "data_offset": 0, 00:10:45.744 "data_size": 0 00:10:45.744 }, 00:10:45.744 { 00:10:45.744 "name": "BaseBdev2", 00:10:45.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.744 "is_configured": false, 00:10:45.744 "data_offset": 0, 00:10:45.744 "data_size": 0 00:10:45.744 }, 00:10:45.744 { 00:10:45.744 "name": "BaseBdev3", 00:10:45.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.744 "is_configured": false, 00:10:45.744 "data_offset": 0, 00:10:45.744 "data_size": 0 00:10:45.744 } 00:10:45.744 ] 00:10:45.744 }' 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.744 09:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.006 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:46.006 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.006 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.006 [2024-11-27 09:47:47.021281] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.006 [2024-11-27 09:47:47.021323] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:46.006 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.006 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:46.006 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.006 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.006 [2024-11-27 09:47:47.033247] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:46.006 [2024-11-27 09:47:47.033366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:46.006 [2024-11-27 09:47:47.033398] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:46.006 [2024-11-27 09:47:47.033423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:46.006 [2024-11-27 09:47:47.033441] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:46.006 [2024-11-27 09:47:47.033463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:46.006 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.006 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:46.006 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.006 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.006 [2024-11-27 09:47:47.087976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:46.006 BaseBdev1 
00:10:46.006 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.006 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.007 [ 00:10:46.007 { 00:10:46.007 "name": "BaseBdev1", 00:10:46.007 "aliases": [ 00:10:46.007 "eca76890-cf23-4e88-8826-8f47138b1c35" 00:10:46.007 ], 00:10:46.007 "product_name": "Malloc disk", 00:10:46.007 "block_size": 512, 00:10:46.007 "num_blocks": 65536, 00:10:46.007 "uuid": "eca76890-cf23-4e88-8826-8f47138b1c35", 00:10:46.007 "assigned_rate_limits": { 00:10:46.007 
"rw_ios_per_sec": 0, 00:10:46.007 "rw_mbytes_per_sec": 0, 00:10:46.007 "r_mbytes_per_sec": 0, 00:10:46.007 "w_mbytes_per_sec": 0 00:10:46.007 }, 00:10:46.007 "claimed": true, 00:10:46.007 "claim_type": "exclusive_write", 00:10:46.007 "zoned": false, 00:10:46.007 "supported_io_types": { 00:10:46.007 "read": true, 00:10:46.007 "write": true, 00:10:46.007 "unmap": true, 00:10:46.007 "flush": true, 00:10:46.007 "reset": true, 00:10:46.007 "nvme_admin": false, 00:10:46.007 "nvme_io": false, 00:10:46.007 "nvme_io_md": false, 00:10:46.007 "write_zeroes": true, 00:10:46.007 "zcopy": true, 00:10:46.007 "get_zone_info": false, 00:10:46.007 "zone_management": false, 00:10:46.007 "zone_append": false, 00:10:46.007 "compare": false, 00:10:46.007 "compare_and_write": false, 00:10:46.007 "abort": true, 00:10:46.007 "seek_hole": false, 00:10:46.007 "seek_data": false, 00:10:46.007 "copy": true, 00:10:46.007 "nvme_iov_md": false 00:10:46.007 }, 00:10:46.007 "memory_domains": [ 00:10:46.007 { 00:10:46.007 "dma_device_id": "system", 00:10:46.007 "dma_device_type": 1 00:10:46.007 }, 00:10:46.007 { 00:10:46.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.007 "dma_device_type": 2 00:10:46.007 } 00:10:46.007 ], 00:10:46.007 "driver_specific": {} 00:10:46.007 } 00:10:46.007 ] 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.007 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.296 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.296 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.296 "name": "Existed_Raid", 00:10:46.296 "uuid": "b2bdc1dd-4167-49b2-9b6e-e56fd8bd1697", 00:10:46.296 "strip_size_kb": 64, 00:10:46.296 "state": "configuring", 00:10:46.296 "raid_level": "concat", 00:10:46.296 "superblock": true, 00:10:46.296 "num_base_bdevs": 3, 00:10:46.296 "num_base_bdevs_discovered": 1, 00:10:46.296 "num_base_bdevs_operational": 3, 00:10:46.296 "base_bdevs_list": [ 00:10:46.296 { 00:10:46.296 "name": "BaseBdev1", 00:10:46.296 "uuid": "eca76890-cf23-4e88-8826-8f47138b1c35", 00:10:46.296 "is_configured": true, 00:10:46.296 "data_offset": 2048, 00:10:46.296 "data_size": 
63488 00:10:46.296 }, 00:10:46.296 { 00:10:46.296 "name": "BaseBdev2", 00:10:46.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.296 "is_configured": false, 00:10:46.296 "data_offset": 0, 00:10:46.296 "data_size": 0 00:10:46.296 }, 00:10:46.296 { 00:10:46.296 "name": "BaseBdev3", 00:10:46.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.296 "is_configured": false, 00:10:46.296 "data_offset": 0, 00:10:46.296 "data_size": 0 00:10:46.296 } 00:10:46.296 ] 00:10:46.296 }' 00:10:46.296 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.296 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.557 [2024-11-27 09:47:47.547249] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.557 [2024-11-27 09:47:47.547378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.557 [2024-11-27 09:47:47.559279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:46.557 [2024-11-27 
09:47:47.561676] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:46.557 [2024-11-27 09:47:47.561760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:46.557 [2024-11-27 09:47:47.561791] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:46.557 [2024-11-27 09:47:47.561815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.557 "name": "Existed_Raid", 00:10:46.557 "uuid": "d7e7b71d-8c9b-4659-aad6-882cfa4a28b8", 00:10:46.557 "strip_size_kb": 64, 00:10:46.557 "state": "configuring", 00:10:46.557 "raid_level": "concat", 00:10:46.557 "superblock": true, 00:10:46.557 "num_base_bdevs": 3, 00:10:46.557 "num_base_bdevs_discovered": 1, 00:10:46.557 "num_base_bdevs_operational": 3, 00:10:46.557 "base_bdevs_list": [ 00:10:46.557 { 00:10:46.557 "name": "BaseBdev1", 00:10:46.557 "uuid": "eca76890-cf23-4e88-8826-8f47138b1c35", 00:10:46.557 "is_configured": true, 00:10:46.557 "data_offset": 2048, 00:10:46.557 "data_size": 63488 00:10:46.557 }, 00:10:46.557 { 00:10:46.557 "name": "BaseBdev2", 00:10:46.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.557 "is_configured": false, 00:10:46.557 "data_offset": 0, 00:10:46.557 "data_size": 0 00:10:46.557 }, 00:10:46.557 { 00:10:46.557 "name": "BaseBdev3", 00:10:46.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.557 "is_configured": false, 00:10:46.557 "data_offset": 0, 00:10:46.557 "data_size": 0 00:10:46.557 } 00:10:46.557 ] 00:10:46.557 }' 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.557 09:47:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:47.127 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:47.127 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.127 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.127 [2024-11-27 09:47:48.056776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.127 BaseBdev2 00:10:47.127 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.127 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:47.127 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:47.127 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.127 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:47.127 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.127 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.127 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.127 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.127 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.127 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.127 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:47.127 09:47:48 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.127 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.127 [ 00:10:47.127 { 00:10:47.127 "name": "BaseBdev2", 00:10:47.127 "aliases": [ 00:10:47.127 "0de62fa8-92e0-48e5-b019-21a8693852dc" 00:10:47.127 ], 00:10:47.127 "product_name": "Malloc disk", 00:10:47.127 "block_size": 512, 00:10:47.127 "num_blocks": 65536, 00:10:47.127 "uuid": "0de62fa8-92e0-48e5-b019-21a8693852dc", 00:10:47.127 "assigned_rate_limits": { 00:10:47.127 "rw_ios_per_sec": 0, 00:10:47.127 "rw_mbytes_per_sec": 0, 00:10:47.127 "r_mbytes_per_sec": 0, 00:10:47.127 "w_mbytes_per_sec": 0 00:10:47.127 }, 00:10:47.127 "claimed": true, 00:10:47.127 "claim_type": "exclusive_write", 00:10:47.128 "zoned": false, 00:10:47.128 "supported_io_types": { 00:10:47.128 "read": true, 00:10:47.128 "write": true, 00:10:47.128 "unmap": true, 00:10:47.128 "flush": true, 00:10:47.128 "reset": true, 00:10:47.128 "nvme_admin": false, 00:10:47.128 "nvme_io": false, 00:10:47.128 "nvme_io_md": false, 00:10:47.128 "write_zeroes": true, 00:10:47.128 "zcopy": true, 00:10:47.128 "get_zone_info": false, 00:10:47.128 "zone_management": false, 00:10:47.128 "zone_append": false, 00:10:47.128 "compare": false, 00:10:47.128 "compare_and_write": false, 00:10:47.128 "abort": true, 00:10:47.128 "seek_hole": false, 00:10:47.128 "seek_data": false, 00:10:47.128 "copy": true, 00:10:47.128 "nvme_iov_md": false 00:10:47.128 }, 00:10:47.128 "memory_domains": [ 00:10:47.128 { 00:10:47.128 "dma_device_id": "system", 00:10:47.128 "dma_device_type": 1 00:10:47.128 }, 00:10:47.128 { 00:10:47.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.128 "dma_device_type": 2 00:10:47.128 } 00:10:47.128 ], 00:10:47.128 "driver_specific": {} 00:10:47.128 } 00:10:47.128 ] 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.128 "name": "Existed_Raid", 00:10:47.128 "uuid": "d7e7b71d-8c9b-4659-aad6-882cfa4a28b8", 00:10:47.128 "strip_size_kb": 64, 00:10:47.128 "state": "configuring", 00:10:47.128 "raid_level": "concat", 00:10:47.128 "superblock": true, 00:10:47.128 "num_base_bdevs": 3, 00:10:47.128 "num_base_bdevs_discovered": 2, 00:10:47.128 "num_base_bdevs_operational": 3, 00:10:47.128 "base_bdevs_list": [ 00:10:47.128 { 00:10:47.128 "name": "BaseBdev1", 00:10:47.128 "uuid": "eca76890-cf23-4e88-8826-8f47138b1c35", 00:10:47.128 "is_configured": true, 00:10:47.128 "data_offset": 2048, 00:10:47.128 "data_size": 63488 00:10:47.128 }, 00:10:47.128 { 00:10:47.128 "name": "BaseBdev2", 00:10:47.128 "uuid": "0de62fa8-92e0-48e5-b019-21a8693852dc", 00:10:47.128 "is_configured": true, 00:10:47.128 "data_offset": 2048, 00:10:47.128 "data_size": 63488 00:10:47.128 }, 00:10:47.128 { 00:10:47.128 "name": "BaseBdev3", 00:10:47.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.128 "is_configured": false, 00:10:47.128 "data_offset": 0, 00:10:47.128 "data_size": 0 00:10:47.128 } 00:10:47.128 ] 00:10:47.128 }' 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.128 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.697 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:47.697 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.697 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.697 [2024-11-27 09:47:48.581497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.697 BaseBdev3 00:10:47.697 [2024-11-27 
09:47:48.581933] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:47.697 [2024-11-27 09:47:48.581965] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:47.697 [2024-11-27 09:47:48.582300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:47.697 [2024-11-27 09:47:48.582479] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:47.697 [2024-11-27 09:47:48.582490] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:47.697 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.697 [2024-11-27 09:47:48.582652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.697 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:47.697 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:47.697 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.697 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:47.697 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.697 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.697 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.697 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.697 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.697 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:47.697 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:47.697 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.698 [ 00:10:47.698 { 00:10:47.698 "name": "BaseBdev3", 00:10:47.698 "aliases": [ 00:10:47.698 "b7a51359-5962-4e1d-b455-195f2b5df1f3" 00:10:47.698 ], 00:10:47.698 "product_name": "Malloc disk", 00:10:47.698 "block_size": 512, 00:10:47.698 "num_blocks": 65536, 00:10:47.698 "uuid": "b7a51359-5962-4e1d-b455-195f2b5df1f3", 00:10:47.698 "assigned_rate_limits": { 00:10:47.698 "rw_ios_per_sec": 0, 00:10:47.698 "rw_mbytes_per_sec": 0, 00:10:47.698 "r_mbytes_per_sec": 0, 00:10:47.698 "w_mbytes_per_sec": 0 00:10:47.698 }, 00:10:47.698 "claimed": true, 00:10:47.698 "claim_type": "exclusive_write", 00:10:47.698 "zoned": false, 00:10:47.698 "supported_io_types": { 00:10:47.698 "read": true, 00:10:47.698 "write": true, 00:10:47.698 "unmap": true, 00:10:47.698 "flush": true, 00:10:47.698 "reset": true, 00:10:47.698 "nvme_admin": false, 00:10:47.698 "nvme_io": false, 00:10:47.698 "nvme_io_md": false, 00:10:47.698 "write_zeroes": true, 00:10:47.698 "zcopy": true, 00:10:47.698 "get_zone_info": false, 00:10:47.698 "zone_management": false, 00:10:47.698 "zone_append": false, 00:10:47.698 "compare": false, 00:10:47.698 "compare_and_write": false, 00:10:47.698 "abort": true, 00:10:47.698 "seek_hole": false, 00:10:47.698 "seek_data": false, 00:10:47.698 "copy": true, 00:10:47.698 "nvme_iov_md": false 00:10:47.698 }, 00:10:47.698 "memory_domains": [ 00:10:47.698 { 00:10:47.698 "dma_device_id": "system", 00:10:47.698 "dma_device_type": 1 00:10:47.698 }, 00:10:47.698 { 00:10:47.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.698 "dma_device_type": 2 00:10:47.698 } 00:10:47.698 ], 00:10:47.698 "driver_specific": {} 
00:10:47.698 } 00:10:47.698 ] 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.698 "name": "Existed_Raid", 00:10:47.698 "uuid": "d7e7b71d-8c9b-4659-aad6-882cfa4a28b8", 00:10:47.698 "strip_size_kb": 64, 00:10:47.698 "state": "online", 00:10:47.698 "raid_level": "concat", 00:10:47.698 "superblock": true, 00:10:47.698 "num_base_bdevs": 3, 00:10:47.698 "num_base_bdevs_discovered": 3, 00:10:47.698 "num_base_bdevs_operational": 3, 00:10:47.698 "base_bdevs_list": [ 00:10:47.698 { 00:10:47.698 "name": "BaseBdev1", 00:10:47.698 "uuid": "eca76890-cf23-4e88-8826-8f47138b1c35", 00:10:47.698 "is_configured": true, 00:10:47.698 "data_offset": 2048, 00:10:47.698 "data_size": 63488 00:10:47.698 }, 00:10:47.698 { 00:10:47.698 "name": "BaseBdev2", 00:10:47.698 "uuid": "0de62fa8-92e0-48e5-b019-21a8693852dc", 00:10:47.698 "is_configured": true, 00:10:47.698 "data_offset": 2048, 00:10:47.698 "data_size": 63488 00:10:47.698 }, 00:10:47.698 { 00:10:47.698 "name": "BaseBdev3", 00:10:47.698 "uuid": "b7a51359-5962-4e1d-b455-195f2b5df1f3", 00:10:47.698 "is_configured": true, 00:10:47.698 "data_offset": 2048, 00:10:47.698 "data_size": 63488 00:10:47.698 } 00:10:47.698 ] 00:10:47.698 }' 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.698 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.958 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:47.958 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:47.958 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:10:47.958 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:47.958 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:47.958 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:47.958 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:47.958 09:47:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:47.958 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.958 09:47:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.958 [2024-11-27 09:47:49.001241] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.958 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.958 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:47.958 "name": "Existed_Raid", 00:10:47.958 "aliases": [ 00:10:47.958 "d7e7b71d-8c9b-4659-aad6-882cfa4a28b8" 00:10:47.958 ], 00:10:47.958 "product_name": "Raid Volume", 00:10:47.958 "block_size": 512, 00:10:47.958 "num_blocks": 190464, 00:10:47.958 "uuid": "d7e7b71d-8c9b-4659-aad6-882cfa4a28b8", 00:10:47.958 "assigned_rate_limits": { 00:10:47.958 "rw_ios_per_sec": 0, 00:10:47.958 "rw_mbytes_per_sec": 0, 00:10:47.958 "r_mbytes_per_sec": 0, 00:10:47.958 "w_mbytes_per_sec": 0 00:10:47.958 }, 00:10:47.958 "claimed": false, 00:10:47.958 "zoned": false, 00:10:47.958 "supported_io_types": { 00:10:47.958 "read": true, 00:10:47.958 "write": true, 00:10:47.958 "unmap": true, 00:10:47.958 "flush": true, 00:10:47.958 "reset": true, 00:10:47.958 "nvme_admin": false, 00:10:47.958 "nvme_io": false, 00:10:47.958 "nvme_io_md": false, 00:10:47.958 
"write_zeroes": true, 00:10:47.958 "zcopy": false, 00:10:47.958 "get_zone_info": false, 00:10:47.958 "zone_management": false, 00:10:47.958 "zone_append": false, 00:10:47.958 "compare": false, 00:10:47.958 "compare_and_write": false, 00:10:47.958 "abort": false, 00:10:47.958 "seek_hole": false, 00:10:47.958 "seek_data": false, 00:10:47.958 "copy": false, 00:10:47.958 "nvme_iov_md": false 00:10:47.958 }, 00:10:47.958 "memory_domains": [ 00:10:47.958 { 00:10:47.958 "dma_device_id": "system", 00:10:47.958 "dma_device_type": 1 00:10:47.958 }, 00:10:47.958 { 00:10:47.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.958 "dma_device_type": 2 00:10:47.958 }, 00:10:47.958 { 00:10:47.958 "dma_device_id": "system", 00:10:47.958 "dma_device_type": 1 00:10:47.958 }, 00:10:47.958 { 00:10:47.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.958 "dma_device_type": 2 00:10:47.958 }, 00:10:47.958 { 00:10:47.958 "dma_device_id": "system", 00:10:47.958 "dma_device_type": 1 00:10:47.958 }, 00:10:47.958 { 00:10:47.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.958 "dma_device_type": 2 00:10:47.958 } 00:10:47.958 ], 00:10:47.958 "driver_specific": { 00:10:47.958 "raid": { 00:10:47.958 "uuid": "d7e7b71d-8c9b-4659-aad6-882cfa4a28b8", 00:10:47.958 "strip_size_kb": 64, 00:10:47.958 "state": "online", 00:10:47.958 "raid_level": "concat", 00:10:47.958 "superblock": true, 00:10:47.958 "num_base_bdevs": 3, 00:10:47.958 "num_base_bdevs_discovered": 3, 00:10:47.958 "num_base_bdevs_operational": 3, 00:10:47.958 "base_bdevs_list": [ 00:10:47.958 { 00:10:47.958 "name": "BaseBdev1", 00:10:47.958 "uuid": "eca76890-cf23-4e88-8826-8f47138b1c35", 00:10:47.958 "is_configured": true, 00:10:47.958 "data_offset": 2048, 00:10:47.958 "data_size": 63488 00:10:47.958 }, 00:10:47.958 { 00:10:47.958 "name": "BaseBdev2", 00:10:47.958 "uuid": "0de62fa8-92e0-48e5-b019-21a8693852dc", 00:10:47.958 "is_configured": true, 00:10:47.958 "data_offset": 2048, 00:10:47.958 "data_size": 63488 00:10:47.958 }, 
00:10:47.958 { 00:10:47.958 "name": "BaseBdev3", 00:10:47.958 "uuid": "b7a51359-5962-4e1d-b455-195f2b5df1f3", 00:10:47.958 "is_configured": true, 00:10:47.958 "data_offset": 2048, 00:10:47.959 "data_size": 63488 00:10:47.959 } 00:10:47.959 ] 00:10:47.959 } 00:10:47.959 } 00:10:47.959 }' 00:10:47.959 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:47.959 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:47.959 BaseBdev2 00:10:47.959 BaseBdev3' 00:10:47.959 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.219 
09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.219 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.219 [2024-11-27 09:47:49.260517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:48.219 [2024-11-27 09:47:49.260598] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:48.219 [2024-11-27 09:47:49.260706] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.479 "name": "Existed_Raid", 00:10:48.479 "uuid": "d7e7b71d-8c9b-4659-aad6-882cfa4a28b8", 00:10:48.479 "strip_size_kb": 64, 00:10:48.479 "state": "offline", 00:10:48.479 "raid_level": "concat", 00:10:48.479 "superblock": true, 00:10:48.479 "num_base_bdevs": 3, 00:10:48.479 "num_base_bdevs_discovered": 2, 00:10:48.479 "num_base_bdevs_operational": 2, 00:10:48.479 "base_bdevs_list": [ 00:10:48.479 { 00:10:48.479 "name": null, 00:10:48.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.479 "is_configured": false, 00:10:48.479 "data_offset": 0, 00:10:48.479 "data_size": 63488 00:10:48.479 }, 00:10:48.479 { 00:10:48.479 "name": "BaseBdev2", 00:10:48.479 "uuid": "0de62fa8-92e0-48e5-b019-21a8693852dc", 00:10:48.479 "is_configured": true, 00:10:48.479 "data_offset": 2048, 00:10:48.479 "data_size": 63488 00:10:48.479 }, 00:10:48.479 { 00:10:48.479 "name": "BaseBdev3", 00:10:48.479 "uuid": "b7a51359-5962-4e1d-b455-195f2b5df1f3", 
00:10:48.479 "is_configured": true, 00:10:48.479 "data_offset": 2048, 00:10:48.479 "data_size": 63488 00:10:48.479 } 00:10:48.479 ] 00:10:48.479 }' 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.479 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.739 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:48.739 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.739 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.739 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.739 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.739 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.739 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.739 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:48.739 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.739 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:48.739 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.739 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.739 [2024-11-27 09:47:49.859917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:48.998 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.998 09:47:49 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:48.998 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.998 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.998 09:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.998 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.998 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.998 09:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.998 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:48.998 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.998 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:48.998 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.998 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.998 [2024-11-27 09:47:50.025141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:48.998 [2024-11-27 09:47:50.025207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.259 BaseBdev2 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.259 09:47:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.259 [ 00:10:49.259 { 00:10:49.259 "name": "BaseBdev2", 00:10:49.259 "aliases": [ 00:10:49.259 "7965add6-b66e-460a-977a-f5fb872eaf5a" 00:10:49.259 ], 00:10:49.259 "product_name": "Malloc disk", 00:10:49.259 "block_size": 512, 00:10:49.259 "num_blocks": 65536, 00:10:49.259 "uuid": "7965add6-b66e-460a-977a-f5fb872eaf5a", 00:10:49.259 "assigned_rate_limits": { 00:10:49.259 "rw_ios_per_sec": 0, 00:10:49.259 "rw_mbytes_per_sec": 0, 00:10:49.259 "r_mbytes_per_sec": 0, 00:10:49.259 "w_mbytes_per_sec": 0 00:10:49.259 }, 00:10:49.259 "claimed": false, 00:10:49.259 "zoned": false, 00:10:49.259 "supported_io_types": { 00:10:49.259 "read": true, 00:10:49.259 "write": true, 00:10:49.259 "unmap": true, 00:10:49.259 "flush": true, 00:10:49.259 "reset": true, 00:10:49.259 "nvme_admin": false, 00:10:49.259 "nvme_io": false, 00:10:49.259 "nvme_io_md": false, 00:10:49.259 "write_zeroes": true, 00:10:49.259 "zcopy": true, 00:10:49.259 "get_zone_info": false, 00:10:49.259 
"zone_management": false, 00:10:49.259 "zone_append": false, 00:10:49.259 "compare": false, 00:10:49.259 "compare_and_write": false, 00:10:49.259 "abort": true, 00:10:49.259 "seek_hole": false, 00:10:49.259 "seek_data": false, 00:10:49.259 "copy": true, 00:10:49.259 "nvme_iov_md": false 00:10:49.259 }, 00:10:49.259 "memory_domains": [ 00:10:49.259 { 00:10:49.259 "dma_device_id": "system", 00:10:49.259 "dma_device_type": 1 00:10:49.259 }, 00:10:49.259 { 00:10:49.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.259 "dma_device_type": 2 00:10:49.259 } 00:10:49.259 ], 00:10:49.259 "driver_specific": {} 00:10:49.259 } 00:10:49.259 ] 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.259 BaseBdev3 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.259 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.259 [ 00:10:49.259 { 00:10:49.259 "name": "BaseBdev3", 00:10:49.259 "aliases": [ 00:10:49.259 "1188d74b-e2b2-4a70-a928-6c9c629ec7ed" 00:10:49.259 ], 00:10:49.259 "product_name": "Malloc disk", 00:10:49.259 "block_size": 512, 00:10:49.259 "num_blocks": 65536, 00:10:49.259 "uuid": "1188d74b-e2b2-4a70-a928-6c9c629ec7ed", 00:10:49.260 "assigned_rate_limits": { 00:10:49.260 "rw_ios_per_sec": 0, 00:10:49.260 "rw_mbytes_per_sec": 0, 00:10:49.260 "r_mbytes_per_sec": 0, 00:10:49.260 "w_mbytes_per_sec": 0 00:10:49.260 }, 00:10:49.260 "claimed": false, 00:10:49.260 "zoned": false, 00:10:49.260 "supported_io_types": { 00:10:49.260 "read": true, 00:10:49.260 "write": true, 00:10:49.260 "unmap": true, 00:10:49.260 "flush": true, 00:10:49.260 "reset": true, 00:10:49.260 "nvme_admin": false, 00:10:49.260 "nvme_io": false, 00:10:49.260 "nvme_io_md": false, 00:10:49.260 "write_zeroes": true, 00:10:49.260 
"zcopy": true, 00:10:49.260 "get_zone_info": false, 00:10:49.260 "zone_management": false, 00:10:49.260 "zone_append": false, 00:10:49.260 "compare": false, 00:10:49.260 "compare_and_write": false, 00:10:49.260 "abort": true, 00:10:49.260 "seek_hole": false, 00:10:49.260 "seek_data": false, 00:10:49.260 "copy": true, 00:10:49.260 "nvme_iov_md": false 00:10:49.260 }, 00:10:49.260 "memory_domains": [ 00:10:49.260 { 00:10:49.260 "dma_device_id": "system", 00:10:49.260 "dma_device_type": 1 00:10:49.260 }, 00:10:49.260 { 00:10:49.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.260 "dma_device_type": 2 00:10:49.260 } 00:10:49.260 ], 00:10:49.260 "driver_specific": {} 00:10:49.260 } 00:10:49.260 ] 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.260 [2024-11-27 09:47:50.365394] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.260 [2024-11-27 09:47:50.365491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.260 [2024-11-27 09:47:50.365538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.260 [2024-11-27 09:47:50.367680] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.260 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.520 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.520 09:47:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.520 "name": "Existed_Raid", 00:10:49.520 "uuid": "140e9cf5-0e3e-4884-8d87-be686fc95fb8", 00:10:49.520 "strip_size_kb": 64, 00:10:49.520 "state": "configuring", 00:10:49.520 "raid_level": "concat", 00:10:49.520 "superblock": true, 00:10:49.520 "num_base_bdevs": 3, 00:10:49.520 "num_base_bdevs_discovered": 2, 00:10:49.520 "num_base_bdevs_operational": 3, 00:10:49.520 "base_bdevs_list": [ 00:10:49.520 { 00:10:49.520 "name": "BaseBdev1", 00:10:49.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.520 "is_configured": false, 00:10:49.520 "data_offset": 0, 00:10:49.520 "data_size": 0 00:10:49.520 }, 00:10:49.520 { 00:10:49.520 "name": "BaseBdev2", 00:10:49.520 "uuid": "7965add6-b66e-460a-977a-f5fb872eaf5a", 00:10:49.520 "is_configured": true, 00:10:49.520 "data_offset": 2048, 00:10:49.520 "data_size": 63488 00:10:49.520 }, 00:10:49.520 { 00:10:49.520 "name": "BaseBdev3", 00:10:49.520 "uuid": "1188d74b-e2b2-4a70-a928-6c9c629ec7ed", 00:10:49.520 "is_configured": true, 00:10:49.520 "data_offset": 2048, 00:10:49.520 "data_size": 63488 00:10:49.520 } 00:10:49.520 ] 00:10:49.520 }' 00:10:49.520 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.520 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.780 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:49.780 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.780 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.780 [2024-11-27 09:47:50.732833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:49.780 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.780 09:47:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:49.780 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.780 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.780 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.780 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.780 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:49.780 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.780 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.780 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.780 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.780 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.780 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.780 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.781 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.781 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.781 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.781 "name": "Existed_Raid", 00:10:49.781 "uuid": "140e9cf5-0e3e-4884-8d87-be686fc95fb8", 00:10:49.781 "strip_size_kb": 64, 
00:10:49.781 "state": "configuring", 00:10:49.781 "raid_level": "concat", 00:10:49.781 "superblock": true, 00:10:49.781 "num_base_bdevs": 3, 00:10:49.781 "num_base_bdevs_discovered": 1, 00:10:49.781 "num_base_bdevs_operational": 3, 00:10:49.781 "base_bdevs_list": [ 00:10:49.781 { 00:10:49.781 "name": "BaseBdev1", 00:10:49.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.781 "is_configured": false, 00:10:49.781 "data_offset": 0, 00:10:49.781 "data_size": 0 00:10:49.781 }, 00:10:49.781 { 00:10:49.781 "name": null, 00:10:49.781 "uuid": "7965add6-b66e-460a-977a-f5fb872eaf5a", 00:10:49.781 "is_configured": false, 00:10:49.781 "data_offset": 0, 00:10:49.781 "data_size": 63488 00:10:49.781 }, 00:10:49.781 { 00:10:49.781 "name": "BaseBdev3", 00:10:49.781 "uuid": "1188d74b-e2b2-4a70-a928-6c9c629ec7ed", 00:10:49.781 "is_configured": true, 00:10:49.781 "data_offset": 2048, 00:10:49.781 "data_size": 63488 00:10:49.781 } 00:10:49.781 ] 00:10:49.781 }' 00:10:49.781 09:47:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.781 09:47:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.040 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.040 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.040 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.040 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:50.040 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.300 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:50.300 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:10:50.300 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.300 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.300 [2024-11-27 09:47:51.220409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.300 BaseBdev1 00:10:50.300 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.300 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:50.300 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:50.300 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.300 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:50.300 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.300 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.300 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.300 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.300 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.300 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.300 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.301 
[ 00:10:50.301 { 00:10:50.301 "name": "BaseBdev1", 00:10:50.301 "aliases": [ 00:10:50.301 "3daef2b5-8491-403f-a260-655ebc1140b1" 00:10:50.301 ], 00:10:50.301 "product_name": "Malloc disk", 00:10:50.301 "block_size": 512, 00:10:50.301 "num_blocks": 65536, 00:10:50.301 "uuid": "3daef2b5-8491-403f-a260-655ebc1140b1", 00:10:50.301 "assigned_rate_limits": { 00:10:50.301 "rw_ios_per_sec": 0, 00:10:50.301 "rw_mbytes_per_sec": 0, 00:10:50.301 "r_mbytes_per_sec": 0, 00:10:50.301 "w_mbytes_per_sec": 0 00:10:50.301 }, 00:10:50.301 "claimed": true, 00:10:50.301 "claim_type": "exclusive_write", 00:10:50.301 "zoned": false, 00:10:50.301 "supported_io_types": { 00:10:50.301 "read": true, 00:10:50.301 "write": true, 00:10:50.301 "unmap": true, 00:10:50.301 "flush": true, 00:10:50.301 "reset": true, 00:10:50.301 "nvme_admin": false, 00:10:50.301 "nvme_io": false, 00:10:50.301 "nvme_io_md": false, 00:10:50.301 "write_zeroes": true, 00:10:50.301 "zcopy": true, 00:10:50.301 "get_zone_info": false, 00:10:50.301 "zone_management": false, 00:10:50.301 "zone_append": false, 00:10:50.301 "compare": false, 00:10:50.301 "compare_and_write": false, 00:10:50.301 "abort": true, 00:10:50.301 "seek_hole": false, 00:10:50.301 "seek_data": false, 00:10:50.301 "copy": true, 00:10:50.301 "nvme_iov_md": false 00:10:50.301 }, 00:10:50.301 "memory_domains": [ 00:10:50.301 { 00:10:50.301 "dma_device_id": "system", 00:10:50.301 "dma_device_type": 1 00:10:50.301 }, 00:10:50.301 { 00:10:50.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.301 "dma_device_type": 2 00:10:50.301 } 00:10:50.301 ], 00:10:50.301 "driver_specific": {} 00:10:50.301 } 00:10:50.301 ] 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.301 "name": "Existed_Raid", 00:10:50.301 "uuid": "140e9cf5-0e3e-4884-8d87-be686fc95fb8", 00:10:50.301 "strip_size_kb": 64, 00:10:50.301 "state": "configuring", 00:10:50.301 "raid_level": "concat", 00:10:50.301 "superblock": true, 
00:10:50.301 "num_base_bdevs": 3, 00:10:50.301 "num_base_bdevs_discovered": 2, 00:10:50.301 "num_base_bdevs_operational": 3, 00:10:50.301 "base_bdevs_list": [ 00:10:50.301 { 00:10:50.301 "name": "BaseBdev1", 00:10:50.301 "uuid": "3daef2b5-8491-403f-a260-655ebc1140b1", 00:10:50.301 "is_configured": true, 00:10:50.301 "data_offset": 2048, 00:10:50.301 "data_size": 63488 00:10:50.301 }, 00:10:50.301 { 00:10:50.301 "name": null, 00:10:50.301 "uuid": "7965add6-b66e-460a-977a-f5fb872eaf5a", 00:10:50.301 "is_configured": false, 00:10:50.301 "data_offset": 0, 00:10:50.301 "data_size": 63488 00:10:50.301 }, 00:10:50.301 { 00:10:50.301 "name": "BaseBdev3", 00:10:50.301 "uuid": "1188d74b-e2b2-4a70-a928-6c9c629ec7ed", 00:10:50.301 "is_configured": true, 00:10:50.301 "data_offset": 2048, 00:10:50.301 "data_size": 63488 00:10:50.301 } 00:10:50.301 ] 00:10:50.301 }' 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.301 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.561 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.561 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.561 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.561 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.821 [2024-11-27 09:47:51.711645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.821 "name": "Existed_Raid", 00:10:50.821 "uuid": "140e9cf5-0e3e-4884-8d87-be686fc95fb8", 00:10:50.821 "strip_size_kb": 64, 00:10:50.821 "state": "configuring", 00:10:50.821 "raid_level": "concat", 00:10:50.821 "superblock": true, 00:10:50.821 "num_base_bdevs": 3, 00:10:50.821 "num_base_bdevs_discovered": 1, 00:10:50.821 "num_base_bdevs_operational": 3, 00:10:50.821 "base_bdevs_list": [ 00:10:50.821 { 00:10:50.821 "name": "BaseBdev1", 00:10:50.821 "uuid": "3daef2b5-8491-403f-a260-655ebc1140b1", 00:10:50.821 "is_configured": true, 00:10:50.821 "data_offset": 2048, 00:10:50.821 "data_size": 63488 00:10:50.821 }, 00:10:50.821 { 00:10:50.821 "name": null, 00:10:50.821 "uuid": "7965add6-b66e-460a-977a-f5fb872eaf5a", 00:10:50.821 "is_configured": false, 00:10:50.821 "data_offset": 0, 00:10:50.821 "data_size": 63488 00:10:50.821 }, 00:10:50.821 { 00:10:50.821 "name": null, 00:10:50.821 "uuid": "1188d74b-e2b2-4a70-a928-6c9c629ec7ed", 00:10:50.821 "is_configured": false, 00:10:50.821 "data_offset": 0, 00:10:50.821 "data_size": 63488 00:10:50.821 } 00:10:50.821 ] 00:10:50.821 }' 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.821 09:47:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.081 [2024-11-27 09:47:52.174881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.081 09:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.342 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.342 "name": "Existed_Raid", 00:10:51.342 "uuid": "140e9cf5-0e3e-4884-8d87-be686fc95fb8", 00:10:51.342 "strip_size_kb": 64, 00:10:51.342 "state": "configuring", 00:10:51.342 "raid_level": "concat", 00:10:51.342 "superblock": true, 00:10:51.342 "num_base_bdevs": 3, 00:10:51.342 "num_base_bdevs_discovered": 2, 00:10:51.342 "num_base_bdevs_operational": 3, 00:10:51.342 "base_bdevs_list": [ 00:10:51.342 { 00:10:51.342 "name": "BaseBdev1", 00:10:51.342 "uuid": "3daef2b5-8491-403f-a260-655ebc1140b1", 00:10:51.342 "is_configured": true, 00:10:51.342 "data_offset": 2048, 00:10:51.342 "data_size": 63488 00:10:51.342 }, 00:10:51.342 { 00:10:51.342 "name": null, 00:10:51.342 "uuid": "7965add6-b66e-460a-977a-f5fb872eaf5a", 00:10:51.342 "is_configured": false, 00:10:51.342 "data_offset": 0, 00:10:51.342 "data_size": 63488 00:10:51.342 }, 00:10:51.342 { 00:10:51.342 "name": "BaseBdev3", 00:10:51.342 "uuid": "1188d74b-e2b2-4a70-a928-6c9c629ec7ed", 00:10:51.342 "is_configured": true, 00:10:51.342 "data_offset": 2048, 00:10:51.342 "data_size": 63488 00:10:51.342 } 00:10:51.342 ] 00:10:51.342 }' 00:10:51.342 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.342 09:47:52 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:51.602 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.602 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:51.602 09:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.602 09:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.602 09:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.602 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:51.602 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:51.602 09:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.602 09:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.602 [2024-11-27 09:47:52.634163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:51.862 09:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.862 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:51.862 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.862 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.862 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.862 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.862 09:47:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.862 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.862 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.862 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.862 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.862 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.862 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.862 09:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.862 09:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.862 09:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.862 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.862 "name": "Existed_Raid", 00:10:51.862 "uuid": "140e9cf5-0e3e-4884-8d87-be686fc95fb8", 00:10:51.862 "strip_size_kb": 64, 00:10:51.862 "state": "configuring", 00:10:51.862 "raid_level": "concat", 00:10:51.862 "superblock": true, 00:10:51.862 "num_base_bdevs": 3, 00:10:51.862 "num_base_bdevs_discovered": 1, 00:10:51.862 "num_base_bdevs_operational": 3, 00:10:51.862 "base_bdevs_list": [ 00:10:51.862 { 00:10:51.862 "name": null, 00:10:51.862 "uuid": "3daef2b5-8491-403f-a260-655ebc1140b1", 00:10:51.862 "is_configured": false, 00:10:51.862 "data_offset": 0, 00:10:51.862 "data_size": 63488 00:10:51.862 }, 00:10:51.862 { 00:10:51.862 "name": null, 00:10:51.862 "uuid": "7965add6-b66e-460a-977a-f5fb872eaf5a", 00:10:51.862 "is_configured": false, 00:10:51.862 "data_offset": 0, 
00:10:51.862 "data_size": 63488 00:10:51.862 }, 00:10:51.862 { 00:10:51.862 "name": "BaseBdev3", 00:10:51.862 "uuid": "1188d74b-e2b2-4a70-a928-6c9c629ec7ed", 00:10:51.863 "is_configured": true, 00:10:51.863 "data_offset": 2048, 00:10:51.863 "data_size": 63488 00:10:51.863 } 00:10:51.863 ] 00:10:51.863 }' 00:10:51.863 09:47:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.863 09:47:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.122 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.122 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:52.122 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.122 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.122 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.382 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:52.382 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:52.382 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.382 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.382 [2024-11-27 09:47:53.260367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.382 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.382 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:52.382 09:47:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.382 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.382 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.382 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.382 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.382 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.382 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.382 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.382 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.383 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.383 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.383 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.383 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.383 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.383 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.383 "name": "Existed_Raid", 00:10:52.383 "uuid": "140e9cf5-0e3e-4884-8d87-be686fc95fb8", 00:10:52.383 "strip_size_kb": 64, 00:10:52.383 "state": "configuring", 00:10:52.383 "raid_level": "concat", 00:10:52.383 "superblock": true, 00:10:52.383 "num_base_bdevs": 3, 00:10:52.383 
"num_base_bdevs_discovered": 2, 00:10:52.383 "num_base_bdevs_operational": 3, 00:10:52.383 "base_bdevs_list": [ 00:10:52.383 { 00:10:52.383 "name": null, 00:10:52.383 "uuid": "3daef2b5-8491-403f-a260-655ebc1140b1", 00:10:52.383 "is_configured": false, 00:10:52.383 "data_offset": 0, 00:10:52.383 "data_size": 63488 00:10:52.383 }, 00:10:52.383 { 00:10:52.383 "name": "BaseBdev2", 00:10:52.383 "uuid": "7965add6-b66e-460a-977a-f5fb872eaf5a", 00:10:52.383 "is_configured": true, 00:10:52.383 "data_offset": 2048, 00:10:52.383 "data_size": 63488 00:10:52.383 }, 00:10:52.383 { 00:10:52.383 "name": "BaseBdev3", 00:10:52.383 "uuid": "1188d74b-e2b2-4a70-a928-6c9c629ec7ed", 00:10:52.383 "is_configured": true, 00:10:52.383 "data_offset": 2048, 00:10:52.383 "data_size": 63488 00:10:52.383 } 00:10:52.383 ] 00:10:52.383 }' 00:10:52.383 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.383 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.642 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.642 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:52.642 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.642 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.642 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.642 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:52.642 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.642 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.643 09:47:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.643 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:52.643 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3daef2b5-8491-403f-a260-655ebc1140b1 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.903 [2024-11-27 09:47:53.825758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:52.903 [2024-11-27 09:47:53.826139] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:52.903 [2024-11-27 09:47:53.826198] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:52.903 [2024-11-27 09:47:53.826512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:52.903 NewBaseBdev 00:10:52.903 [2024-11-27 09:47:53.826737] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:52.903 [2024-11-27 09:47:53.826750] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:52.903 [2024-11-27 09:47:53.826906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.903 [ 00:10:52.903 { 00:10:52.903 "name": "NewBaseBdev", 00:10:52.903 "aliases": [ 00:10:52.903 "3daef2b5-8491-403f-a260-655ebc1140b1" 00:10:52.903 ], 00:10:52.903 "product_name": "Malloc disk", 00:10:52.903 "block_size": 512, 00:10:52.903 "num_blocks": 65536, 00:10:52.903 "uuid": "3daef2b5-8491-403f-a260-655ebc1140b1", 00:10:52.903 "assigned_rate_limits": { 00:10:52.903 "rw_ios_per_sec": 0, 00:10:52.903 "rw_mbytes_per_sec": 0, 00:10:52.903 "r_mbytes_per_sec": 0, 00:10:52.903 "w_mbytes_per_sec": 0 00:10:52.903 }, 00:10:52.903 "claimed": true, 00:10:52.903 "claim_type": "exclusive_write", 00:10:52.903 "zoned": false, 00:10:52.903 "supported_io_types": { 00:10:52.903 "read": true, 00:10:52.903 "write": true, 
00:10:52.903 "unmap": true, 00:10:52.903 "flush": true, 00:10:52.903 "reset": true, 00:10:52.903 "nvme_admin": false, 00:10:52.903 "nvme_io": false, 00:10:52.903 "nvme_io_md": false, 00:10:52.903 "write_zeroes": true, 00:10:52.903 "zcopy": true, 00:10:52.903 "get_zone_info": false, 00:10:52.903 "zone_management": false, 00:10:52.903 "zone_append": false, 00:10:52.903 "compare": false, 00:10:52.903 "compare_and_write": false, 00:10:52.903 "abort": true, 00:10:52.903 "seek_hole": false, 00:10:52.903 "seek_data": false, 00:10:52.903 "copy": true, 00:10:52.903 "nvme_iov_md": false 00:10:52.903 }, 00:10:52.903 "memory_domains": [ 00:10:52.903 { 00:10:52.903 "dma_device_id": "system", 00:10:52.903 "dma_device_type": 1 00:10:52.903 }, 00:10:52.903 { 00:10:52.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.903 "dma_device_type": 2 00:10:52.903 } 00:10:52.903 ], 00:10:52.903 "driver_specific": {} 00:10:52.903 } 00:10:52.903 ] 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.903 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.904 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.904 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.904 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.904 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.904 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.904 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.904 "name": "Existed_Raid", 00:10:52.904 "uuid": "140e9cf5-0e3e-4884-8d87-be686fc95fb8", 00:10:52.904 "strip_size_kb": 64, 00:10:52.904 "state": "online", 00:10:52.904 "raid_level": "concat", 00:10:52.904 "superblock": true, 00:10:52.904 "num_base_bdevs": 3, 00:10:52.904 "num_base_bdevs_discovered": 3, 00:10:52.904 "num_base_bdevs_operational": 3, 00:10:52.904 "base_bdevs_list": [ 00:10:52.904 { 00:10:52.904 "name": "NewBaseBdev", 00:10:52.904 "uuid": "3daef2b5-8491-403f-a260-655ebc1140b1", 00:10:52.904 "is_configured": true, 00:10:52.904 "data_offset": 2048, 00:10:52.904 "data_size": 63488 00:10:52.904 }, 00:10:52.904 { 00:10:52.904 "name": "BaseBdev2", 00:10:52.904 "uuid": "7965add6-b66e-460a-977a-f5fb872eaf5a", 00:10:52.904 "is_configured": true, 00:10:52.904 "data_offset": 2048, 00:10:52.904 "data_size": 63488 00:10:52.904 }, 00:10:52.904 { 00:10:52.904 "name": "BaseBdev3", 00:10:52.904 "uuid": 
"1188d74b-e2b2-4a70-a928-6c9c629ec7ed", 00:10:52.904 "is_configured": true, 00:10:52.904 "data_offset": 2048, 00:10:52.904 "data_size": 63488 00:10:52.904 } 00:10:52.904 ] 00:10:52.904 }' 00:10:52.904 09:47:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.904 09:47:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.474 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:53.474 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:53.474 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:53.474 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:53.474 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:53.474 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:53.474 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:53.474 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:53.474 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.474 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.474 [2024-11-27 09:47:54.325334] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.474 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.474 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:53.474 "name": "Existed_Raid", 00:10:53.474 "aliases": [ 00:10:53.474 "140e9cf5-0e3e-4884-8d87-be686fc95fb8" 
00:10:53.474 ], 00:10:53.474 "product_name": "Raid Volume", 00:10:53.474 "block_size": 512, 00:10:53.474 "num_blocks": 190464, 00:10:53.474 "uuid": "140e9cf5-0e3e-4884-8d87-be686fc95fb8", 00:10:53.474 "assigned_rate_limits": { 00:10:53.474 "rw_ios_per_sec": 0, 00:10:53.474 "rw_mbytes_per_sec": 0, 00:10:53.474 "r_mbytes_per_sec": 0, 00:10:53.474 "w_mbytes_per_sec": 0 00:10:53.474 }, 00:10:53.474 "claimed": false, 00:10:53.474 "zoned": false, 00:10:53.474 "supported_io_types": { 00:10:53.474 "read": true, 00:10:53.474 "write": true, 00:10:53.474 "unmap": true, 00:10:53.474 "flush": true, 00:10:53.474 "reset": true, 00:10:53.474 "nvme_admin": false, 00:10:53.474 "nvme_io": false, 00:10:53.474 "nvme_io_md": false, 00:10:53.474 "write_zeroes": true, 00:10:53.474 "zcopy": false, 00:10:53.474 "get_zone_info": false, 00:10:53.474 "zone_management": false, 00:10:53.474 "zone_append": false, 00:10:53.474 "compare": false, 00:10:53.474 "compare_and_write": false, 00:10:53.474 "abort": false, 00:10:53.474 "seek_hole": false, 00:10:53.474 "seek_data": false, 00:10:53.474 "copy": false, 00:10:53.474 "nvme_iov_md": false 00:10:53.474 }, 00:10:53.474 "memory_domains": [ 00:10:53.474 { 00:10:53.474 "dma_device_id": "system", 00:10:53.474 "dma_device_type": 1 00:10:53.474 }, 00:10:53.474 { 00:10:53.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.474 "dma_device_type": 2 00:10:53.474 }, 00:10:53.474 { 00:10:53.474 "dma_device_id": "system", 00:10:53.474 "dma_device_type": 1 00:10:53.474 }, 00:10:53.474 { 00:10:53.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.474 "dma_device_type": 2 00:10:53.474 }, 00:10:53.474 { 00:10:53.474 "dma_device_id": "system", 00:10:53.474 "dma_device_type": 1 00:10:53.474 }, 00:10:53.474 { 00:10:53.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.474 "dma_device_type": 2 00:10:53.474 } 00:10:53.474 ], 00:10:53.474 "driver_specific": { 00:10:53.474 "raid": { 00:10:53.474 "uuid": "140e9cf5-0e3e-4884-8d87-be686fc95fb8", 00:10:53.474 
"strip_size_kb": 64, 00:10:53.474 "state": "online", 00:10:53.474 "raid_level": "concat", 00:10:53.474 "superblock": true, 00:10:53.474 "num_base_bdevs": 3, 00:10:53.474 "num_base_bdevs_discovered": 3, 00:10:53.474 "num_base_bdevs_operational": 3, 00:10:53.474 "base_bdevs_list": [ 00:10:53.474 { 00:10:53.474 "name": "NewBaseBdev", 00:10:53.474 "uuid": "3daef2b5-8491-403f-a260-655ebc1140b1", 00:10:53.474 "is_configured": true, 00:10:53.474 "data_offset": 2048, 00:10:53.474 "data_size": 63488 00:10:53.474 }, 00:10:53.475 { 00:10:53.475 "name": "BaseBdev2", 00:10:53.475 "uuid": "7965add6-b66e-460a-977a-f5fb872eaf5a", 00:10:53.475 "is_configured": true, 00:10:53.475 "data_offset": 2048, 00:10:53.475 "data_size": 63488 00:10:53.475 }, 00:10:53.475 { 00:10:53.475 "name": "BaseBdev3", 00:10:53.475 "uuid": "1188d74b-e2b2-4a70-a928-6c9c629ec7ed", 00:10:53.475 "is_configured": true, 00:10:53.475 "data_offset": 2048, 00:10:53.475 "data_size": 63488 00:10:53.475 } 00:10:53.475 ] 00:10:53.475 } 00:10:53.475 } 00:10:53.475 }' 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:53.475 BaseBdev2 00:10:53.475 BaseBdev3' 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 
00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.475 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.734 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.734 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.734 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:53.735 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.735 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.735 [2024-11-27 09:47:54.620454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.735 [2024-11-27 09:47:54.620532] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.735 [2024-11-27 09:47:54.620666] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.735 [2024-11-27 09:47:54.620762] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.735 [2024-11-27 09:47:54.620812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:53.735 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.735 09:47:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66495 00:10:53.735 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66495 ']' 00:10:53.735 09:47:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 66495 00:10:53.735 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:53.735 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.735 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66495 00:10:53.735 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.735 killing process with pid 66495 00:10:53.735 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.735 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66495' 00:10:53.735 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66495 00:10:53.735 [2024-11-27 09:47:54.659167] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.735 09:47:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66495 00:10:53.995 [2024-11-27 09:47:54.984216] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:55.377 09:47:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:55.377 00:10:55.377 real 0m10.587s 00:10:55.377 user 0m16.506s 00:10:55.377 sys 0m1.944s 00:10:55.377 ************************************ 00:10:55.377 END TEST raid_state_function_test_sb 00:10:55.377 ************************************ 00:10:55.377 09:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.377 09:47:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.377 09:47:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:10:55.377 09:47:56 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:55.378 09:47:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.378 09:47:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:55.378 ************************************ 00:10:55.378 START TEST raid_superblock_test 00:10:55.378 ************************************ 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:55.378 09:47:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67115 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67115 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67115 ']' 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.378 09:47:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.378 [2024-11-27 09:47:56.397855] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:10:55.378 [2024-11-27 09:47:56.398018] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67115 ] 00:10:55.638 [2024-11-27 09:47:56.579892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.638 [2024-11-27 09:47:56.722318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.904 [2024-11-27 09:47:56.951380] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.904 [2024-11-27 09:47:56.951453] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.175 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.175 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:56.175 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:56.175 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.175 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:56.175 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:56.175 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:56.175 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.175 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.175 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.175 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:56.175 
09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.175 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.439 malloc1 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.439 [2024-11-27 09:47:57.327042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:56.439 [2024-11-27 09:47:57.327202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.439 [2024-11-27 09:47:57.327249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:56.439 [2024-11-27 09:47:57.327262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.439 [2024-11-27 09:47:57.329932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.439 [2024-11-27 09:47:57.329977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:56.439 pt1 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.439 malloc2 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.439 [2024-11-27 09:47:57.394717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:56.439 [2024-11-27 09:47:57.394898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.439 [2024-11-27 09:47:57.394962] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:56.439 [2024-11-27 09:47:57.395023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.439 [2024-11-27 09:47:57.397870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.439 [2024-11-27 09:47:57.397961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:56.439 
pt2 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.439 malloc3 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.439 [2024-11-27 09:47:57.476000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:56.439 [2024-11-27 09:47:57.476206] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.439 [2024-11-27 09:47:57.476259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:56.439 [2024-11-27 09:47:57.476318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.439 [2024-11-27 09:47:57.479065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.439 [2024-11-27 09:47:57.479137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:56.439 pt3 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.439 [2024-11-27 09:47:57.488058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:56.439 [2024-11-27 09:47:57.490312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:56.439 [2024-11-27 09:47:57.490434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:56.439 [2024-11-27 09:47:57.490624] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:56.439 [2024-11-27 09:47:57.490679] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:56.439 [2024-11-27 09:47:57.491007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:56.439 [2024-11-27 09:47:57.491260] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:56.439 [2024-11-27 09:47:57.491301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:56.439 [2024-11-27 09:47:57.491508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.439 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.440 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.440 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.440 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.440 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.440 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.440 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.440 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.440 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.440 09:47:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.440 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.440 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.440 "name": "raid_bdev1", 00:10:56.440 "uuid": "30ca1816-92f9-4cf1-aecd-b8c2b5186f9f", 00:10:56.440 "strip_size_kb": 64, 00:10:56.440 "state": "online", 00:10:56.440 "raid_level": "concat", 00:10:56.440 "superblock": true, 00:10:56.440 "num_base_bdevs": 3, 00:10:56.440 "num_base_bdevs_discovered": 3, 00:10:56.440 "num_base_bdevs_operational": 3, 00:10:56.440 "base_bdevs_list": [ 00:10:56.440 { 00:10:56.440 "name": "pt1", 00:10:56.440 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:56.440 "is_configured": true, 00:10:56.440 "data_offset": 2048, 00:10:56.440 "data_size": 63488 00:10:56.440 }, 00:10:56.440 { 00:10:56.440 "name": "pt2", 00:10:56.440 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:56.440 "is_configured": true, 00:10:56.440 "data_offset": 2048, 00:10:56.440 "data_size": 63488 00:10:56.440 }, 00:10:56.440 { 00:10:56.440 "name": "pt3", 00:10:56.440 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:56.440 "is_configured": true, 00:10:56.440 "data_offset": 2048, 00:10:56.440 "data_size": 63488 00:10:56.440 } 00:10:56.440 ] 00:10:56.440 }' 00:10:56.440 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.440 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.010 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:57.010 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:57.010 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:57.010 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:57.010 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:57.010 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:57.010 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:57.010 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:57.010 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.010 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.010 [2024-11-27 09:47:57.955552] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.010 09:47:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.010 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:57.010 "name": "raid_bdev1", 00:10:57.010 "aliases": [ 00:10:57.010 "30ca1816-92f9-4cf1-aecd-b8c2b5186f9f" 00:10:57.010 ], 00:10:57.010 "product_name": "Raid Volume", 00:10:57.010 "block_size": 512, 00:10:57.010 "num_blocks": 190464, 00:10:57.010 "uuid": "30ca1816-92f9-4cf1-aecd-b8c2b5186f9f", 00:10:57.010 "assigned_rate_limits": { 00:10:57.010 "rw_ios_per_sec": 0, 00:10:57.010 "rw_mbytes_per_sec": 0, 00:10:57.010 "r_mbytes_per_sec": 0, 00:10:57.010 "w_mbytes_per_sec": 0 00:10:57.010 }, 00:10:57.010 "claimed": false, 00:10:57.010 "zoned": false, 00:10:57.010 "supported_io_types": { 00:10:57.010 "read": true, 00:10:57.010 "write": true, 00:10:57.010 "unmap": true, 00:10:57.010 "flush": true, 00:10:57.010 "reset": true, 00:10:57.010 "nvme_admin": false, 00:10:57.010 "nvme_io": false, 00:10:57.010 "nvme_io_md": false, 00:10:57.010 "write_zeroes": true, 00:10:57.010 "zcopy": false, 00:10:57.010 "get_zone_info": false, 00:10:57.010 "zone_management": false, 00:10:57.010 "zone_append": false, 00:10:57.010 "compare": 
false, 00:10:57.010 "compare_and_write": false, 00:10:57.010 "abort": false, 00:10:57.010 "seek_hole": false, 00:10:57.010 "seek_data": false, 00:10:57.010 "copy": false, 00:10:57.010 "nvme_iov_md": false 00:10:57.010 }, 00:10:57.010 "memory_domains": [ 00:10:57.010 { 00:10:57.010 "dma_device_id": "system", 00:10:57.010 "dma_device_type": 1 00:10:57.010 }, 00:10:57.010 { 00:10:57.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.010 "dma_device_type": 2 00:10:57.010 }, 00:10:57.010 { 00:10:57.010 "dma_device_id": "system", 00:10:57.010 "dma_device_type": 1 00:10:57.010 }, 00:10:57.010 { 00:10:57.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.010 "dma_device_type": 2 00:10:57.010 }, 00:10:57.010 { 00:10:57.010 "dma_device_id": "system", 00:10:57.010 "dma_device_type": 1 00:10:57.010 }, 00:10:57.010 { 00:10:57.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.010 "dma_device_type": 2 00:10:57.010 } 00:10:57.010 ], 00:10:57.010 "driver_specific": { 00:10:57.010 "raid": { 00:10:57.010 "uuid": "30ca1816-92f9-4cf1-aecd-b8c2b5186f9f", 00:10:57.010 "strip_size_kb": 64, 00:10:57.010 "state": "online", 00:10:57.010 "raid_level": "concat", 00:10:57.010 "superblock": true, 00:10:57.010 "num_base_bdevs": 3, 00:10:57.010 "num_base_bdevs_discovered": 3, 00:10:57.010 "num_base_bdevs_operational": 3, 00:10:57.010 "base_bdevs_list": [ 00:10:57.011 { 00:10:57.011 "name": "pt1", 00:10:57.011 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.011 "is_configured": true, 00:10:57.011 "data_offset": 2048, 00:10:57.011 "data_size": 63488 00:10:57.011 }, 00:10:57.011 { 00:10:57.011 "name": "pt2", 00:10:57.011 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.011 "is_configured": true, 00:10:57.011 "data_offset": 2048, 00:10:57.011 "data_size": 63488 00:10:57.011 }, 00:10:57.011 { 00:10:57.011 "name": "pt3", 00:10:57.011 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:57.011 "is_configured": true, 00:10:57.011 "data_offset": 2048, 00:10:57.011 
"data_size": 63488 00:10:57.011 } 00:10:57.011 ] 00:10:57.011 } 00:10:57.011 } 00:10:57.011 }' 00:10:57.011 09:47:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:57.011 pt2 00:10:57.011 pt3' 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:57.011 09:47:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.011 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.272 [2024-11-27 09:47:58.183091] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.272 09:47:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=30ca1816-92f9-4cf1-aecd-b8c2b5186f9f 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 30ca1816-92f9-4cf1-aecd-b8c2b5186f9f ']' 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.272 [2024-11-27 09:47:58.230731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.272 [2024-11-27 09:47:58.230807] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.272 [2024-11-27 09:47:58.230924] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.272 [2024-11-27 09:47:58.231052] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.272 [2024-11-27 09:47:58.231122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.272 09:47:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # rpc_cmd bdev_get_bdevs 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.272 [2024-11-27 09:47:58.382602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:57.272 [2024-11-27 09:47:58.385042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:10:57.272 [2024-11-27 09:47:58.385160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:57.272 [2024-11-27 09:47:58.385275] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:57.272 [2024-11-27 09:47:58.385406] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:57.272 [2024-11-27 09:47:58.385488] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:57.272 [2024-11-27 09:47:58.385568] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.272 [2024-11-27 09:47:58.385606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:57.272 request: 00:10:57.272 { 00:10:57.272 "name": "raid_bdev1", 00:10:57.272 "raid_level": "concat", 00:10:57.272 "base_bdevs": [ 00:10:57.272 "malloc1", 00:10:57.272 "malloc2", 00:10:57.272 "malloc3" 00:10:57.272 ], 00:10:57.272 "strip_size_kb": 64, 00:10:57.272 "superblock": false, 00:10:57.272 "method": "bdev_raid_create", 00:10:57.272 "req_id": 1 00:10:57.272 } 00:10:57.272 Got JSON-RPC error response 00:10:57.272 response: 00:10:57.272 { 00:10:57.272 "code": -17, 00:10:57.272 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:57.272 } 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.272 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.533 [2024-11-27 09:47:58.450343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:57.533 [2024-11-27 09:47:58.450432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.533 [2024-11-27 09:47:58.450471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:57.533 [2024-11-27 09:47:58.450501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.533 [2024-11-27 09:47:58.453022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.533 [2024-11-27 09:47:58.453091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:57.533 [2024-11-27 09:47:58.453226] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:57.533 [2024-11-27 09:47:58.453329] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:57.533 pt1 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.533 "name": "raid_bdev1", 
00:10:57.533 "uuid": "30ca1816-92f9-4cf1-aecd-b8c2b5186f9f", 00:10:57.533 "strip_size_kb": 64, 00:10:57.533 "state": "configuring", 00:10:57.533 "raid_level": "concat", 00:10:57.533 "superblock": true, 00:10:57.533 "num_base_bdevs": 3, 00:10:57.533 "num_base_bdevs_discovered": 1, 00:10:57.533 "num_base_bdevs_operational": 3, 00:10:57.533 "base_bdevs_list": [ 00:10:57.533 { 00:10:57.533 "name": "pt1", 00:10:57.533 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.533 "is_configured": true, 00:10:57.533 "data_offset": 2048, 00:10:57.533 "data_size": 63488 00:10:57.533 }, 00:10:57.533 { 00:10:57.533 "name": null, 00:10:57.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.533 "is_configured": false, 00:10:57.533 "data_offset": 2048, 00:10:57.533 "data_size": 63488 00:10:57.533 }, 00:10:57.533 { 00:10:57.533 "name": null, 00:10:57.533 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:57.533 "is_configured": false, 00:10:57.533 "data_offset": 2048, 00:10:57.533 "data_size": 63488 00:10:57.533 } 00:10:57.533 ] 00:10:57.533 }' 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.533 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.814 [2024-11-27 09:47:58.901617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:57.814 [2024-11-27 09:47:58.901755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.814 [2024-11-27 09:47:58.901811] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:57.814 [2024-11-27 09:47:58.901858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.814 [2024-11-27 09:47:58.902440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.814 [2024-11-27 09:47:58.902507] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:57.814 [2024-11-27 09:47:58.902669] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:57.814 [2024-11-27 09:47:58.902745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:57.814 pt2 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.814 [2024-11-27 09:47:58.913556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.814 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.074 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.074 "name": "raid_bdev1", 00:10:58.074 "uuid": "30ca1816-92f9-4cf1-aecd-b8c2b5186f9f", 00:10:58.074 "strip_size_kb": 64, 00:10:58.074 "state": "configuring", 00:10:58.074 "raid_level": "concat", 00:10:58.074 "superblock": true, 00:10:58.074 "num_base_bdevs": 3, 00:10:58.074 "num_base_bdevs_discovered": 1, 00:10:58.074 "num_base_bdevs_operational": 3, 00:10:58.074 "base_bdevs_list": [ 00:10:58.074 { 00:10:58.074 "name": "pt1", 00:10:58.074 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.074 "is_configured": true, 00:10:58.074 "data_offset": 2048, 00:10:58.074 "data_size": 63488 00:10:58.074 }, 00:10:58.074 { 00:10:58.074 "name": null, 00:10:58.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.074 "is_configured": false, 00:10:58.074 "data_offset": 0, 00:10:58.074 "data_size": 63488 00:10:58.074 }, 00:10:58.074 { 00:10:58.074 "name": null, 00:10:58.074 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.074 "is_configured": false, 00:10:58.074 "data_offset": 2048, 00:10:58.074 "data_size": 63488 00:10:58.074 } 00:10:58.074 ] 00:10:58.074 }' 00:10:58.074 09:47:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.074 09:47:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.334 [2024-11-27 09:47:59.300886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:58.334 [2024-11-27 09:47:59.301025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.334 [2024-11-27 09:47:59.301065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:58.334 [2024-11-27 09:47:59.301168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.334 [2024-11-27 09:47:59.301744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.334 [2024-11-27 09:47:59.301813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:58.334 [2024-11-27 09:47:59.301953] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:58.334 [2024-11-27 09:47:59.302029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:58.334 pt2 00:10:58.334 09:47:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.334 [2024-11-27 09:47:59.312823] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:58.334 [2024-11-27 09:47:59.312875] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.334 [2024-11-27 09:47:59.312889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:58.334 [2024-11-27 09:47:59.312900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.334 [2024-11-27 09:47:59.313304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.334 [2024-11-27 09:47:59.313334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:58.334 [2024-11-27 09:47:59.313411] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:58.334 [2024-11-27 09:47:59.313436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:58.334 [2024-11-27 09:47:59.313570] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:58.334 [2024-11-27 09:47:59.313582] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:58.334 [2024-11-27 09:47:59.313853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:10:58.334 [2024-11-27 09:47:59.314063] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:58.334 [2024-11-27 09:47:59.314080] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:58.334 [2024-11-27 09:47:59.314228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.334 pt3 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.334 09:47:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.334 "name": "raid_bdev1", 00:10:58.334 "uuid": "30ca1816-92f9-4cf1-aecd-b8c2b5186f9f", 00:10:58.334 "strip_size_kb": 64, 00:10:58.334 "state": "online", 00:10:58.334 "raid_level": "concat", 00:10:58.334 "superblock": true, 00:10:58.334 "num_base_bdevs": 3, 00:10:58.334 "num_base_bdevs_discovered": 3, 00:10:58.334 "num_base_bdevs_operational": 3, 00:10:58.334 "base_bdevs_list": [ 00:10:58.334 { 00:10:58.334 "name": "pt1", 00:10:58.334 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.334 "is_configured": true, 00:10:58.334 "data_offset": 2048, 00:10:58.334 "data_size": 63488 00:10:58.334 }, 00:10:58.334 { 00:10:58.334 "name": "pt2", 00:10:58.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.334 "is_configured": true, 00:10:58.334 "data_offset": 2048, 00:10:58.334 "data_size": 63488 00:10:58.334 }, 00:10:58.334 { 00:10:58.334 "name": "pt3", 00:10:58.334 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.334 "is_configured": true, 00:10:58.334 "data_offset": 2048, 00:10:58.334 "data_size": 63488 00:10:58.334 } 00:10:58.334 ] 00:10:58.334 }' 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.334 09:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.903 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:58.903 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:10:58.903 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:58.903 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:58.903 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:58.903 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:58.903 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:58.903 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:58.903 09:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.903 09:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.903 [2024-11-27 09:47:59.744564] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.903 09:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.903 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:58.903 "name": "raid_bdev1", 00:10:58.903 "aliases": [ 00:10:58.903 "30ca1816-92f9-4cf1-aecd-b8c2b5186f9f" 00:10:58.903 ], 00:10:58.903 "product_name": "Raid Volume", 00:10:58.903 "block_size": 512, 00:10:58.903 "num_blocks": 190464, 00:10:58.903 "uuid": "30ca1816-92f9-4cf1-aecd-b8c2b5186f9f", 00:10:58.903 "assigned_rate_limits": { 00:10:58.903 "rw_ios_per_sec": 0, 00:10:58.903 "rw_mbytes_per_sec": 0, 00:10:58.903 "r_mbytes_per_sec": 0, 00:10:58.903 "w_mbytes_per_sec": 0 00:10:58.903 }, 00:10:58.903 "claimed": false, 00:10:58.903 "zoned": false, 00:10:58.903 "supported_io_types": { 00:10:58.903 "read": true, 00:10:58.903 "write": true, 00:10:58.903 "unmap": true, 00:10:58.903 "flush": true, 00:10:58.903 "reset": true, 00:10:58.903 "nvme_admin": false, 00:10:58.903 "nvme_io": false, 
00:10:58.903 "nvme_io_md": false, 00:10:58.903 "write_zeroes": true, 00:10:58.903 "zcopy": false, 00:10:58.903 "get_zone_info": false, 00:10:58.903 "zone_management": false, 00:10:58.903 "zone_append": false, 00:10:58.903 "compare": false, 00:10:58.903 "compare_and_write": false, 00:10:58.903 "abort": false, 00:10:58.903 "seek_hole": false, 00:10:58.903 "seek_data": false, 00:10:58.903 "copy": false, 00:10:58.903 "nvme_iov_md": false 00:10:58.903 }, 00:10:58.903 "memory_domains": [ 00:10:58.903 { 00:10:58.903 "dma_device_id": "system", 00:10:58.903 "dma_device_type": 1 00:10:58.903 }, 00:10:58.903 { 00:10:58.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.903 "dma_device_type": 2 00:10:58.903 }, 00:10:58.903 { 00:10:58.903 "dma_device_id": "system", 00:10:58.903 "dma_device_type": 1 00:10:58.903 }, 00:10:58.903 { 00:10:58.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.903 "dma_device_type": 2 00:10:58.903 }, 00:10:58.903 { 00:10:58.903 "dma_device_id": "system", 00:10:58.903 "dma_device_type": 1 00:10:58.903 }, 00:10:58.903 { 00:10:58.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.903 "dma_device_type": 2 00:10:58.903 } 00:10:58.903 ], 00:10:58.903 "driver_specific": { 00:10:58.903 "raid": { 00:10:58.903 "uuid": "30ca1816-92f9-4cf1-aecd-b8c2b5186f9f", 00:10:58.903 "strip_size_kb": 64, 00:10:58.903 "state": "online", 00:10:58.903 "raid_level": "concat", 00:10:58.903 "superblock": true, 00:10:58.903 "num_base_bdevs": 3, 00:10:58.903 "num_base_bdevs_discovered": 3, 00:10:58.904 "num_base_bdevs_operational": 3, 00:10:58.904 "base_bdevs_list": [ 00:10:58.904 { 00:10:58.904 "name": "pt1", 00:10:58.904 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.904 "is_configured": true, 00:10:58.904 "data_offset": 2048, 00:10:58.904 "data_size": 63488 00:10:58.904 }, 00:10:58.904 { 00:10:58.904 "name": "pt2", 00:10:58.904 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.904 "is_configured": true, 00:10:58.904 "data_offset": 2048, 00:10:58.904 
"data_size": 63488 00:10:58.904 }, 00:10:58.904 { 00:10:58.904 "name": "pt3", 00:10:58.904 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.904 "is_configured": true, 00:10:58.904 "data_offset": 2048, 00:10:58.904 "data_size": 63488 00:10:58.904 } 00:10:58.904 ] 00:10:58.904 } 00:10:58.904 } 00:10:58.904 }' 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:58.904 pt2 00:10:58.904 pt3' 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.904 09:47:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.904 09:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.904 09:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.904 09:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:58.904 09:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.904 09:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.904 09:48:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:58.904 [2024-11-27 09:48:00.023959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.164 09:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.164 09:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 30ca1816-92f9-4cf1-aecd-b8c2b5186f9f '!=' 30ca1816-92f9-4cf1-aecd-b8c2b5186f9f ']' 00:10:59.164 09:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:59.164 09:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:59.164 09:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:59.164 09:48:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67115 00:10:59.164 09:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67115 ']' 00:10:59.164 09:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67115 00:10:59.164 09:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:59.164 09:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.164 09:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67115 00:10:59.164 09:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:59.164 killing process with pid 67115 00:10:59.164 09:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:59.164 09:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67115' 00:10:59.164 09:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67115 00:10:59.164 [2024-11-27 09:48:00.109228] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:10:59.164 [2024-11-27 09:48:00.109338] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.164 09:48:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67115 00:10:59.164 [2024-11-27 09:48:00.109407] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.164 [2024-11-27 09:48:00.109421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:59.423 [2024-11-27 09:48:00.437644] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:00.805 09:48:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:00.805 00:11:00.805 real 0m5.373s 00:11:00.805 user 0m7.425s 00:11:00.805 sys 0m1.046s 00:11:00.805 ************************************ 00:11:00.805 END TEST raid_superblock_test 00:11:00.805 ************************************ 00:11:00.805 09:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.805 09:48:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.805 09:48:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:00.805 09:48:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:00.805 09:48:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.805 09:48:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:00.805 ************************************ 00:11:00.805 START TEST raid_read_error_test 00:11:00.805 ************************************ 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:00.805 09:48:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.O0YZ7uKoWQ 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67368 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67368 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67368 ']' 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.805 09:48:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.805 [2024-11-27 09:48:01.854743] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:11:00.805 [2024-11-27 09:48:01.854917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67368 ] 00:11:01.066 [2024-11-27 09:48:02.037978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.066 [2024-11-27 09:48:02.177376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.326 [2024-11-27 09:48:02.407613] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.326 [2024-11-27 09:48:02.407667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.587 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.587 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:01.587 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:01.587 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:01.587 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.587 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.847 BaseBdev1_malloc 00:11:01.847 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.847 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:01.847 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.847 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.847 true 00:11:01.847 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:01.847 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:01.847 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.847 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.847 [2024-11-27 09:48:02.785522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:01.847 [2024-11-27 09:48:02.785662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.847 [2024-11-27 09:48:02.785709] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:01.847 [2024-11-27 09:48:02.785780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.847 [2024-11-27 09:48:02.788374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.847 [2024-11-27 09:48:02.788474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:01.847 BaseBdev1 00:11:01.847 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.847 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:01.847 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:01.847 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.847 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.847 BaseBdev2_malloc 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.848 true 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.848 [2024-11-27 09:48:02.859244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:01.848 [2024-11-27 09:48:02.859361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.848 [2024-11-27 09:48:02.859403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:01.848 [2024-11-27 09:48:02.859449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.848 [2024-11-27 09:48:02.862147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.848 [2024-11-27 09:48:02.862229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:01.848 BaseBdev2 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.848 BaseBdev3_malloc 00:11:01.848 09:48:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.848 true 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.848 [2024-11-27 09:48:02.948862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:01.848 [2024-11-27 09:48:02.948973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.848 [2024-11-27 09:48:02.949037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:01.848 [2024-11-27 09:48:02.949114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.848 [2024-11-27 09:48:02.951640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.848 [2024-11-27 09:48:02.951723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:01.848 BaseBdev3 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.848 [2024-11-27 09:48:02.960937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.848 [2024-11-27 09:48:02.963104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.848 [2024-11-27 09:48:02.963269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:01.848 [2024-11-27 09:48:02.963576] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:01.848 [2024-11-27 09:48:02.963633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:01.848 [2024-11-27 09:48:02.963981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:01.848 [2024-11-27 09:48:02.964248] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:01.848 [2024-11-27 09:48:02.964306] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:01.848 [2024-11-27 09:48:02.964536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.848 09:48:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.848 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.108 09:48:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.108 09:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.108 "name": "raid_bdev1", 00:11:02.108 "uuid": "c165d94c-7897-47f9-8511-c136b77be62e", 00:11:02.108 "strip_size_kb": 64, 00:11:02.108 "state": "online", 00:11:02.108 "raid_level": "concat", 00:11:02.108 "superblock": true, 00:11:02.108 "num_base_bdevs": 3, 00:11:02.108 "num_base_bdevs_discovered": 3, 00:11:02.108 "num_base_bdevs_operational": 3, 00:11:02.108 "base_bdevs_list": [ 00:11:02.108 { 00:11:02.108 "name": "BaseBdev1", 00:11:02.108 "uuid": "d56e0e35-69cc-5970-a1b1-71ad33ed23cc", 00:11:02.108 "is_configured": true, 00:11:02.108 "data_offset": 2048, 00:11:02.108 "data_size": 63488 00:11:02.108 }, 00:11:02.108 { 00:11:02.108 "name": "BaseBdev2", 00:11:02.108 "uuid": "c5a4ad65-5ce9-58f1-8cce-ee02f86436a8", 00:11:02.108 "is_configured": true, 00:11:02.108 "data_offset": 2048, 00:11:02.108 "data_size": 63488 
00:11:02.108 }, 00:11:02.108 { 00:11:02.108 "name": "BaseBdev3", 00:11:02.108 "uuid": "cda11553-6b61-58c1-8719-7c1563f3fc28", 00:11:02.108 "is_configured": true, 00:11:02.108 "data_offset": 2048, 00:11:02.108 "data_size": 63488 00:11:02.108 } 00:11:02.108 ] 00:11:02.108 }' 00:11:02.108 09:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.108 09:48:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.368 09:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:02.368 09:48:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:02.629 [2024-11-27 09:48:03.505363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.569 "name": "raid_bdev1", 00:11:03.569 "uuid": "c165d94c-7897-47f9-8511-c136b77be62e", 00:11:03.569 "strip_size_kb": 64, 00:11:03.569 "state": "online", 00:11:03.569 "raid_level": "concat", 00:11:03.569 "superblock": true, 00:11:03.569 "num_base_bdevs": 3, 00:11:03.569 "num_base_bdevs_discovered": 3, 00:11:03.569 "num_base_bdevs_operational": 3, 00:11:03.569 "base_bdevs_list": [ 00:11:03.569 { 00:11:03.569 "name": "BaseBdev1", 00:11:03.569 "uuid": "d56e0e35-69cc-5970-a1b1-71ad33ed23cc", 00:11:03.569 "is_configured": true, 00:11:03.569 "data_offset": 2048, 00:11:03.569 "data_size": 63488 
00:11:03.569 }, 00:11:03.569 { 00:11:03.569 "name": "BaseBdev2", 00:11:03.569 "uuid": "c5a4ad65-5ce9-58f1-8cce-ee02f86436a8", 00:11:03.569 "is_configured": true, 00:11:03.569 "data_offset": 2048, 00:11:03.569 "data_size": 63488 00:11:03.569 }, 00:11:03.569 { 00:11:03.569 "name": "BaseBdev3", 00:11:03.569 "uuid": "cda11553-6b61-58c1-8719-7c1563f3fc28", 00:11:03.569 "is_configured": true, 00:11:03.569 "data_offset": 2048, 00:11:03.569 "data_size": 63488 00:11:03.569 } 00:11:03.569 ] 00:11:03.569 }' 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.569 09:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.829 09:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:03.829 09:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.829 09:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.829 [2024-11-27 09:48:04.891063] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:03.829 [2024-11-27 09:48:04.891162] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.829 [2024-11-27 09:48:04.894444] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.829 [2024-11-27 09:48:04.894545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.829 [2024-11-27 09:48:04.894606] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.829 [2024-11-27 09:48:04.894622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:03.829 { 00:11:03.829 "results": [ 00:11:03.829 { 00:11:03.829 "job": "raid_bdev1", 00:11:03.829 "core_mask": "0x1", 00:11:03.829 "workload": "randrw", 00:11:03.829 "percentage": 50, 
00:11:03.829 "status": "finished", 00:11:03.829 "queue_depth": 1, 00:11:03.829 "io_size": 131072, 00:11:03.829 "runtime": 1.386277, 00:11:03.829 "iops": 13171.970681184208, 00:11:03.829 "mibps": 1646.496335148026, 00:11:03.829 "io_failed": 1, 00:11:03.829 "io_timeout": 0, 00:11:03.829 "avg_latency_us": 106.47016877307188, 00:11:03.829 "min_latency_us": 26.382532751091702, 00:11:03.829 "max_latency_us": 1466.6899563318777 00:11:03.829 } 00:11:03.829 ], 00:11:03.829 "core_count": 1 00:11:03.829 } 00:11:03.829 09:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.829 09:48:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67368 00:11:03.829 09:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67368 ']' 00:11:03.829 09:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67368 00:11:03.829 09:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:03.829 09:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.829 09:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67368 00:11:03.829 killing process with pid 67368 00:11:03.829 09:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.829 09:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.829 09:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67368' 00:11:03.829 09:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67368 00:11:03.829 [2024-11-27 09:48:04.939112] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:03.829 09:48:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67368 00:11:04.095 [2024-11-27 
09:48:05.201037] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:05.486 09:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:05.486 09:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.O0YZ7uKoWQ 00:11:05.486 09:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:05.486 09:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:05.486 09:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:05.486 09:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:05.486 09:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:05.486 ************************************ 00:11:05.486 END TEST raid_read_error_test 00:11:05.486 ************************************ 00:11:05.486 09:48:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:05.486 00:11:05.486 real 0m4.805s 00:11:05.486 user 0m5.577s 00:11:05.486 sys 0m0.708s 00:11:05.486 09:48:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.486 09:48:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.486 09:48:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:05.486 09:48:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:05.486 09:48:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.486 09:48:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:05.486 ************************************ 00:11:05.486 START TEST raid_write_error_test 00:11:05.486 ************************************ 00:11:05.486 09:48:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:11:05.487 09:48:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:05.487 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:05.487 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:05.487 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:05.487 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.487 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:05.487 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.487 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.487 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:05.487 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.487 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.487 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:05.748 09:48:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.O925BRJtsP 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67519 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67519 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67519 ']' 00:11:05.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.748 09:48:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.748 [2024-11-27 09:48:06.724836] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:11:05.748 [2024-11-27 09:48:06.725162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67519 ] 00:11:06.009 [2024-11-27 09:48:06.906236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.009 [2024-11-27 09:48:07.048199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.268 [2024-11-27 09:48:07.280321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.268 [2024-11-27 09:48:07.280407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.527 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:06.527 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:06.527 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:06.527 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:06.527 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.527 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.527 BaseBdev1_malloc 00:11:06.527 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.527 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:06.527 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.527 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.527 true 00:11:06.527 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.527 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:06.527 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.528 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.528 [2024-11-27 09:48:07.641130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:06.528 [2024-11-27 09:48:07.641247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.528 [2024-11-27 09:48:07.641296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:06.528 [2024-11-27 09:48:07.641341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.528 [2024-11-27 09:48:07.643814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.528 [2024-11-27 09:48:07.643897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:06.528 BaseBdev1 00:11:06.528 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.528 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:06.528 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:06.528 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.528 09:48:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:06.787 BaseBdev2_malloc 00:11:06.787 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.787 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:06.787 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.787 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.787 true 00:11:06.787 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.787 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:06.787 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.787 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.787 [2024-11-27 09:48:07.713059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:06.787 [2024-11-27 09:48:07.713162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.787 [2024-11-27 09:48:07.713183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:06.787 [2024-11-27 09:48:07.713194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.787 [2024-11-27 09:48:07.715685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.787 [2024-11-27 09:48:07.715727] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:06.787 BaseBdev2 00:11:06.787 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.787 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:06.787 09:48:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:06.787 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.787 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.787 BaseBdev3_malloc 00:11:06.787 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.787 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:06.787 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.787 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.787 true 00:11:06.787 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.787 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:06.787 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.788 [2024-11-27 09:48:07.817275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:06.788 [2024-11-27 09:48:07.817379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.788 [2024-11-27 09:48:07.817404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:06.788 [2024-11-27 09:48:07.817415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.788 [2024-11-27 09:48:07.819802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.788 [2024-11-27 09:48:07.819843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:06.788 BaseBdev3 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.788 [2024-11-27 09:48:07.829364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:06.788 [2024-11-27 09:48:07.831464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:06.788 [2024-11-27 09:48:07.831610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.788 [2024-11-27 09:48:07.831836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:06.788 [2024-11-27 09:48:07.831850] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:06.788 [2024-11-27 09:48:07.832130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:06.788 [2024-11-27 09:48:07.832312] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:06.788 [2024-11-27 09:48:07.832335] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:06.788 [2024-11-27 09:48:07.832492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.788 "name": "raid_bdev1", 00:11:06.788 "uuid": "477e982a-66d2-4984-9310-3dfb43b0a1a6", 00:11:06.788 "strip_size_kb": 64, 00:11:06.788 "state": "online", 00:11:06.788 "raid_level": "concat", 00:11:06.788 "superblock": true, 00:11:06.788 "num_base_bdevs": 3, 00:11:06.788 "num_base_bdevs_discovered": 3, 00:11:06.788 "num_base_bdevs_operational": 3, 00:11:06.788 "base_bdevs_list": [ 00:11:06.788 { 00:11:06.788 
"name": "BaseBdev1", 00:11:06.788 "uuid": "ffd85cf1-4528-5443-9c11-7aafceed1694", 00:11:06.788 "is_configured": true, 00:11:06.788 "data_offset": 2048, 00:11:06.788 "data_size": 63488 00:11:06.788 }, 00:11:06.788 { 00:11:06.788 "name": "BaseBdev2", 00:11:06.788 "uuid": "7967a24a-fd84-5cab-9a6d-71d13ba53947", 00:11:06.788 "is_configured": true, 00:11:06.788 "data_offset": 2048, 00:11:06.788 "data_size": 63488 00:11:06.788 }, 00:11:06.788 { 00:11:06.788 "name": "BaseBdev3", 00:11:06.788 "uuid": "32b7ea65-4ab9-5439-8e6f-60e552bd8fa7", 00:11:06.788 "is_configured": true, 00:11:06.788 "data_offset": 2048, 00:11:06.788 "data_size": 63488 00:11:06.788 } 00:11:06.788 ] 00:11:06.788 }' 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.788 09:48:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.357 09:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:07.357 09:48:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:07.357 [2024-11-27 09:48:08.377924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:08.294 09:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:08.294 09:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.294 09:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.294 09:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.294 09:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:08.294 09:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:08.294 09:48:09 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:08.295 09:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:08.295 09:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.295 09:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.295 09:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.295 09:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.295 09:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.295 09:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.295 09:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.295 09:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.295 09:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.295 09:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.295 09:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.295 09:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.295 09:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.295 09:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.295 09:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.295 "name": "raid_bdev1", 00:11:08.295 "uuid": "477e982a-66d2-4984-9310-3dfb43b0a1a6", 00:11:08.295 "strip_size_kb": 64, 00:11:08.295 "state": "online", 
00:11:08.295 "raid_level": "concat", 00:11:08.295 "superblock": true, 00:11:08.295 "num_base_bdevs": 3, 00:11:08.295 "num_base_bdevs_discovered": 3, 00:11:08.295 "num_base_bdevs_operational": 3, 00:11:08.295 "base_bdevs_list": [ 00:11:08.295 { 00:11:08.295 "name": "BaseBdev1", 00:11:08.295 "uuid": "ffd85cf1-4528-5443-9c11-7aafceed1694", 00:11:08.295 "is_configured": true, 00:11:08.295 "data_offset": 2048, 00:11:08.295 "data_size": 63488 00:11:08.295 }, 00:11:08.295 { 00:11:08.295 "name": "BaseBdev2", 00:11:08.295 "uuid": "7967a24a-fd84-5cab-9a6d-71d13ba53947", 00:11:08.295 "is_configured": true, 00:11:08.295 "data_offset": 2048, 00:11:08.295 "data_size": 63488 00:11:08.295 }, 00:11:08.295 { 00:11:08.295 "name": "BaseBdev3", 00:11:08.295 "uuid": "32b7ea65-4ab9-5439-8e6f-60e552bd8fa7", 00:11:08.295 "is_configured": true, 00:11:08.295 "data_offset": 2048, 00:11:08.295 "data_size": 63488 00:11:08.295 } 00:11:08.295 ] 00:11:08.295 }' 00:11:08.295 09:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.295 09:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.555 09:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:08.555 09:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.555 09:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.555 [2024-11-27 09:48:09.682386] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:08.555 [2024-11-27 09:48:09.682505] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:08.555 [2024-11-27 09:48:09.685631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.555 [2024-11-27 09:48:09.685681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.555 [2024-11-27 09:48:09.685722] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:08.555 [2024-11-27 09:48:09.685735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:08.816 { 00:11:08.816 "results": [ 00:11:08.816 { 00:11:08.816 "job": "raid_bdev1", 00:11:08.816 "core_mask": "0x1", 00:11:08.816 "workload": "randrw", 00:11:08.816 "percentage": 50, 00:11:08.816 "status": "finished", 00:11:08.816 "queue_depth": 1, 00:11:08.816 "io_size": 131072, 00:11:08.816 "runtime": 1.304715, 00:11:08.816 "iops": 13885.791149791334, 00:11:08.816 "mibps": 1735.7238937239167, 00:11:08.816 "io_failed": 1, 00:11:08.816 "io_timeout": 0, 00:11:08.816 "avg_latency_us": 100.98269982660975, 00:11:08.816 "min_latency_us": 25.6, 00:11:08.816 "max_latency_us": 1345.0620087336245 00:11:08.816 } 00:11:08.816 ], 00:11:08.816 "core_count": 1 00:11:08.816 } 00:11:08.816 09:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.816 09:48:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67519 00:11:08.816 09:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67519 ']' 00:11:08.816 09:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67519 00:11:08.816 09:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:08.816 09:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.816 09:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67519 00:11:08.816 killing process with pid 67519 00:11:08.816 09:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:08.816 09:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:08.816 09:48:09 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67519' 00:11:08.816 09:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67519 00:11:08.816 [2024-11-27 09:48:09.722193] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:08.816 09:48:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67519 00:11:09.075 [2024-11-27 09:48:09.965135] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:10.455 09:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.O925BRJtsP 00:11:10.455 09:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:10.455 09:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:10.455 09:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:11:10.455 09:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:10.455 09:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:10.455 09:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:10.455 09:48:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:11:10.455 00:11:10.455 real 0m4.639s 00:11:10.455 user 0m5.312s 00:11:10.455 sys 0m0.713s 00:11:10.455 09:48:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.455 ************************************ 00:11:10.455 END TEST raid_write_error_test 00:11:10.455 ************************************ 00:11:10.455 09:48:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.455 09:48:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:10.455 09:48:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:11:10.455 09:48:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:10.455 09:48:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.455 09:48:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:10.455 ************************************ 00:11:10.455 START TEST raid_state_function_test 00:11:10.455 ************************************ 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67663 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67663' 00:11:10.455 Process raid pid: 67663 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67663 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67663 ']' 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.455 09:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.455 [2024-11-27 09:48:11.435502] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:11:10.455 [2024-11-27 09:48:11.435787] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.715 [2024-11-27 09:48:11.615225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.715 [2024-11-27 09:48:11.756532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.975 [2024-11-27 09:48:12.003834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.975 [2024-11-27 09:48:12.003889] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.235 [2024-11-27 09:48:12.281106] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:11.235 [2024-11-27 09:48:12.281180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:11.235 [2024-11-27 09:48:12.281192] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.235 [2024-11-27 09:48:12.281201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.235 [2024-11-27 09:48:12.281208] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:11.235 [2024-11-27 09:48:12.281216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.235 
09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.235 "name": "Existed_Raid", 00:11:11.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.235 "strip_size_kb": 0, 00:11:11.235 "state": "configuring", 00:11:11.235 "raid_level": "raid1", 00:11:11.235 "superblock": false, 00:11:11.235 "num_base_bdevs": 3, 00:11:11.235 "num_base_bdevs_discovered": 0, 00:11:11.235 "num_base_bdevs_operational": 3, 00:11:11.235 "base_bdevs_list": [ 00:11:11.235 { 00:11:11.235 "name": "BaseBdev1", 00:11:11.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.235 "is_configured": false, 00:11:11.235 "data_offset": 0, 00:11:11.235 "data_size": 0 00:11:11.235 }, 00:11:11.235 { 00:11:11.235 "name": "BaseBdev2", 00:11:11.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.235 "is_configured": false, 00:11:11.235 "data_offset": 0, 00:11:11.235 "data_size": 0 00:11:11.235 }, 00:11:11.235 { 00:11:11.235 "name": "BaseBdev3", 00:11:11.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.235 "is_configured": false, 00:11:11.235 "data_offset": 0, 00:11:11.235 "data_size": 0 00:11:11.235 } 00:11:11.235 ] 00:11:11.235 }' 00:11:11.235 09:48:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.235 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.804 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:11.804 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.804 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.804 [2024-11-27 09:48:12.756264] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:11.804 [2024-11-27 09:48:12.756375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:11.804 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.804 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:11.804 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.804 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.804 [2024-11-27 09:48:12.764229] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:11.804 [2024-11-27 09:48:12.764327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:11.804 [2024-11-27 09:48:12.764360] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.804 [2024-11-27 09:48:12.764388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.805 [2024-11-27 09:48:12.764410] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:11.805 [2024-11-27 09:48:12.764435] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.805 [2024-11-27 09:48:12.818919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:11.805 BaseBdev1 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.805 [ 00:11:11.805 { 00:11:11.805 "name": "BaseBdev1", 00:11:11.805 "aliases": [ 00:11:11.805 "587f7f6a-7d7a-4de8-a073-d719abb30240" 00:11:11.805 ], 00:11:11.805 "product_name": "Malloc disk", 00:11:11.805 "block_size": 512, 00:11:11.805 "num_blocks": 65536, 00:11:11.805 "uuid": "587f7f6a-7d7a-4de8-a073-d719abb30240", 00:11:11.805 "assigned_rate_limits": { 00:11:11.805 "rw_ios_per_sec": 0, 00:11:11.805 "rw_mbytes_per_sec": 0, 00:11:11.805 "r_mbytes_per_sec": 0, 00:11:11.805 "w_mbytes_per_sec": 0 00:11:11.805 }, 00:11:11.805 "claimed": true, 00:11:11.805 "claim_type": "exclusive_write", 00:11:11.805 "zoned": false, 00:11:11.805 "supported_io_types": { 00:11:11.805 "read": true, 00:11:11.805 "write": true, 00:11:11.805 "unmap": true, 00:11:11.805 "flush": true, 00:11:11.805 "reset": true, 00:11:11.805 "nvme_admin": false, 00:11:11.805 "nvme_io": false, 00:11:11.805 "nvme_io_md": false, 00:11:11.805 "write_zeroes": true, 00:11:11.805 "zcopy": true, 00:11:11.805 "get_zone_info": false, 00:11:11.805 "zone_management": false, 00:11:11.805 "zone_append": false, 00:11:11.805 "compare": false, 00:11:11.805 "compare_and_write": false, 00:11:11.805 "abort": true, 00:11:11.805 "seek_hole": false, 00:11:11.805 "seek_data": false, 00:11:11.805 "copy": true, 00:11:11.805 "nvme_iov_md": false 00:11:11.805 }, 00:11:11.805 "memory_domains": [ 00:11:11.805 { 00:11:11.805 "dma_device_id": "system", 00:11:11.805 "dma_device_type": 1 00:11:11.805 }, 00:11:11.805 { 00:11:11.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.805 "dma_device_type": 2 00:11:11.805 } 00:11:11.805 ], 00:11:11.805 "driver_specific": {} 00:11:11.805 } 00:11:11.805 ] 00:11:11.805 09:48:12 
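The `waitforbdev` helper traced above polls `rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000` and then inspects the descriptor that the RPC prints. A minimal Python sketch of that inspection, run against a trimmed copy of the Malloc-disk JSON from the log (field names and values are taken from the dump above; the helper name `is_claimed_malloc` is illustrative, not part of SPDK):

```python
import json

# Trimmed descriptor as printed by `bdev_get_bdevs -b BaseBdev1` above.
descriptor = json.loads("""
[
  {
    "name": "BaseBdev1",
    "product_name": "Malloc disk",
    "block_size": 512,
    "num_blocks": 65536,
    "claimed": true,
    "claim_type": "exclusive_write"
  }
]
""")

def is_claimed_malloc(bdevs, name):
    """Return True when `name` exists, is a Malloc disk, and is claimed
    exclusively -- the condition the raid test relies on after the base
    bdev has been pulled into Existed_Raid."""
    for bdev in bdevs:
        if bdev["name"] == name:
            return (bdev["product_name"] == "Malloc disk"
                    and bdev["claimed"]
                    and bdev["claim_type"] == "exclusive_write")
    return False

print(is_claimed_malloc(descriptor, "BaseBdev1"))  # True
```

The `claimed: true` / `claim_type: "exclusive_write"` pair is what distinguishes a base bdev that `raid_bdev_configure_base_bdev` has claimed (the `*DEBUG*: bdev BaseBdev1 is claimed` line) from a free-standing Malloc disk.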
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:11.805 "name": "Existed_Raid", 00:11:11.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.805 "strip_size_kb": 0, 00:11:11.805 "state": "configuring", 00:11:11.805 "raid_level": "raid1", 00:11:11.805 "superblock": false, 00:11:11.805 "num_base_bdevs": 3, 00:11:11.805 "num_base_bdevs_discovered": 1, 00:11:11.805 "num_base_bdevs_operational": 3, 00:11:11.805 "base_bdevs_list": [ 00:11:11.805 { 00:11:11.805 "name": "BaseBdev1", 00:11:11.805 "uuid": "587f7f6a-7d7a-4de8-a073-d719abb30240", 00:11:11.805 "is_configured": true, 00:11:11.805 "data_offset": 0, 00:11:11.805 "data_size": 65536 00:11:11.805 }, 00:11:11.805 { 00:11:11.805 "name": "BaseBdev2", 00:11:11.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.805 "is_configured": false, 00:11:11.805 "data_offset": 0, 00:11:11.805 "data_size": 0 00:11:11.805 }, 00:11:11.805 { 00:11:11.805 "name": "BaseBdev3", 00:11:11.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.805 "is_configured": false, 00:11:11.805 "data_offset": 0, 00:11:11.805 "data_size": 0 00:11:11.805 } 00:11:11.805 ] 00:11:11.805 }' 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.805 09:48:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.374 [2024-11-27 09:48:13.282164] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:12.374 [2024-11-27 09:48:13.282292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.374 [2024-11-27 09:48:13.294175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.374 [2024-11-27 09:48:13.296322] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:12.374 [2024-11-27 09:48:13.296418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:12.374 [2024-11-27 09:48:13.296434] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:12.374 [2024-11-27 09:48:13.296445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.374 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.374 "name": "Existed_Raid", 00:11:12.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.374 "strip_size_kb": 0, 00:11:12.374 "state": "configuring", 00:11:12.374 "raid_level": "raid1", 00:11:12.374 "superblock": false, 00:11:12.374 "num_base_bdevs": 3, 00:11:12.374 "num_base_bdevs_discovered": 1, 00:11:12.374 "num_base_bdevs_operational": 3, 00:11:12.374 "base_bdevs_list": [ 00:11:12.374 { 00:11:12.374 "name": "BaseBdev1", 00:11:12.374 "uuid": "587f7f6a-7d7a-4de8-a073-d719abb30240", 00:11:12.375 "is_configured": true, 00:11:12.375 "data_offset": 0, 00:11:12.375 "data_size": 65536 00:11:12.375 }, 00:11:12.375 { 00:11:12.375 "name": "BaseBdev2", 00:11:12.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.375 
"is_configured": false, 00:11:12.375 "data_offset": 0, 00:11:12.375 "data_size": 0 00:11:12.375 }, 00:11:12.375 { 00:11:12.375 "name": "BaseBdev3", 00:11:12.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.375 "is_configured": false, 00:11:12.375 "data_offset": 0, 00:11:12.375 "data_size": 0 00:11:12.375 } 00:11:12.375 ] 00:11:12.375 }' 00:11:12.375 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.375 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.634 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:12.634 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.634 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.892 [2024-11-27 09:48:13.790420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.892 BaseBdev2 00:11:12.892 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.892 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:12.892 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:12.892 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.892 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:12.892 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.892 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.892 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.892 09:48:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.892 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.892 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.892 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:12.892 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.892 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.892 [ 00:11:12.892 { 00:11:12.892 "name": "BaseBdev2", 00:11:12.892 "aliases": [ 00:11:12.892 "2661d14a-3674-40aa-87ff-3bcff88a0254" 00:11:12.892 ], 00:11:12.892 "product_name": "Malloc disk", 00:11:12.892 "block_size": 512, 00:11:12.892 "num_blocks": 65536, 00:11:12.892 "uuid": "2661d14a-3674-40aa-87ff-3bcff88a0254", 00:11:12.892 "assigned_rate_limits": { 00:11:12.892 "rw_ios_per_sec": 0, 00:11:12.892 "rw_mbytes_per_sec": 0, 00:11:12.892 "r_mbytes_per_sec": 0, 00:11:12.892 "w_mbytes_per_sec": 0 00:11:12.892 }, 00:11:12.892 "claimed": true, 00:11:12.892 "claim_type": "exclusive_write", 00:11:12.892 "zoned": false, 00:11:12.892 "supported_io_types": { 00:11:12.892 "read": true, 00:11:12.892 "write": true, 00:11:12.892 "unmap": true, 00:11:12.892 "flush": true, 00:11:12.892 "reset": true, 00:11:12.892 "nvme_admin": false, 00:11:12.892 "nvme_io": false, 00:11:12.892 "nvme_io_md": false, 00:11:12.892 "write_zeroes": true, 00:11:12.892 "zcopy": true, 00:11:12.892 "get_zone_info": false, 00:11:12.892 "zone_management": false, 00:11:12.892 "zone_append": false, 00:11:12.892 "compare": false, 00:11:12.892 "compare_and_write": false, 00:11:12.892 "abort": true, 00:11:12.893 "seek_hole": false, 00:11:12.893 "seek_data": false, 00:11:12.893 "copy": true, 00:11:12.893 "nvme_iov_md": false 00:11:12.893 }, 00:11:12.893 
"memory_domains": [ 00:11:12.893 { 00:11:12.893 "dma_device_id": "system", 00:11:12.893 "dma_device_type": 1 00:11:12.893 }, 00:11:12.893 { 00:11:12.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.893 "dma_device_type": 2 00:11:12.893 } 00:11:12.893 ], 00:11:12.893 "driver_specific": {} 00:11:12.893 } 00:11:12.893 ] 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.893 "name": "Existed_Raid", 00:11:12.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.893 "strip_size_kb": 0, 00:11:12.893 "state": "configuring", 00:11:12.893 "raid_level": "raid1", 00:11:12.893 "superblock": false, 00:11:12.893 "num_base_bdevs": 3, 00:11:12.893 "num_base_bdevs_discovered": 2, 00:11:12.893 "num_base_bdevs_operational": 3, 00:11:12.893 "base_bdevs_list": [ 00:11:12.893 { 00:11:12.893 "name": "BaseBdev1", 00:11:12.893 "uuid": "587f7f6a-7d7a-4de8-a073-d719abb30240", 00:11:12.893 "is_configured": true, 00:11:12.893 "data_offset": 0, 00:11:12.893 "data_size": 65536 00:11:12.893 }, 00:11:12.893 { 00:11:12.893 "name": "BaseBdev2", 00:11:12.893 "uuid": "2661d14a-3674-40aa-87ff-3bcff88a0254", 00:11:12.893 "is_configured": true, 00:11:12.893 "data_offset": 0, 00:11:12.893 "data_size": 65536 00:11:12.893 }, 00:11:12.893 { 00:11:12.893 "name": "BaseBdev3", 00:11:12.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.893 "is_configured": false, 00:11:12.893 "data_offset": 0, 00:11:12.893 "data_size": 0 00:11:12.893 } 00:11:12.893 ] 00:11:12.893 }' 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.893 09:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.151 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
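`verify_raid_bdev_state` (bdev_raid.sh@103-115) extracts the `Existed_Raid` entry with `jq -r '.[] | select(.name == "Existed_Raid")'` and compares state, raid level, strip size and base-bdev counts against the expected values. The same checks in Python, against the JSON dumped above with two of three base bdevs discovered (a sketch of the shell helper's logic; the function name `verify_state` is illustrative):

```python
import json

# The `Existed_Raid` entry as captured in $raid_bdev_info above, trimmed
# to the fields the helper actually compares.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": false}
  ]
}
""")

def verify_state(info, expected_state, raid_level, strip_size, operational):
    """Mirror the shell helper: reported counters must match expectations,
    and num_base_bdevs_discovered must equal the number of entries in
    base_bdevs_list with is_configured == true."""
    configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    return (info["state"] == expected_state
            and info["raid_level"] == raid_level
            and info["strip_size_kb"] == strip_size
            and info["num_base_bdevs_operational"] == operational
            and info["num_base_bdevs_discovered"] == configured)

print(verify_state(raid_bdev_info, "configuring", "raid1", 0, 3))  # True
```

This is why the array stays in `configuring` through the whole sequence above: each `bdev_malloc_create` raises `num_base_bdevs_discovered` by one, and only when it reaches `num_base_bdevs_operational` (3) does the log flip to `"state": "online"`.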
512 -b BaseBdev3 00:11:13.151 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.151 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.411 [2024-11-27 09:48:14.332145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:13.411 [2024-11-27 09:48:14.332211] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:13.411 [2024-11-27 09:48:14.332225] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:13.411 [2024-11-27 09:48:14.332546] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:13.411 [2024-11-27 09:48:14.332761] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:13.411 [2024-11-27 09:48:14.332772] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:13.411 [2024-11-27 09:48:14.333096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.411 BaseBdev3 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.411 [ 00:11:13.411 { 00:11:13.411 "name": "BaseBdev3", 00:11:13.411 "aliases": [ 00:11:13.411 "679d1370-748c-47a6-bb25-0f2ec3d42d64" 00:11:13.411 ], 00:11:13.411 "product_name": "Malloc disk", 00:11:13.411 "block_size": 512, 00:11:13.411 "num_blocks": 65536, 00:11:13.411 "uuid": "679d1370-748c-47a6-bb25-0f2ec3d42d64", 00:11:13.411 "assigned_rate_limits": { 00:11:13.411 "rw_ios_per_sec": 0, 00:11:13.411 "rw_mbytes_per_sec": 0, 00:11:13.411 "r_mbytes_per_sec": 0, 00:11:13.411 "w_mbytes_per_sec": 0 00:11:13.411 }, 00:11:13.411 "claimed": true, 00:11:13.411 "claim_type": "exclusive_write", 00:11:13.411 "zoned": false, 00:11:13.411 "supported_io_types": { 00:11:13.411 "read": true, 00:11:13.411 "write": true, 00:11:13.411 "unmap": true, 00:11:13.411 "flush": true, 00:11:13.411 "reset": true, 00:11:13.411 "nvme_admin": false, 00:11:13.411 "nvme_io": false, 00:11:13.411 "nvme_io_md": false, 00:11:13.411 "write_zeroes": true, 00:11:13.411 "zcopy": true, 00:11:13.411 "get_zone_info": false, 00:11:13.411 "zone_management": false, 00:11:13.411 "zone_append": false, 00:11:13.411 "compare": false, 00:11:13.411 "compare_and_write": false, 00:11:13.411 "abort": true, 00:11:13.411 "seek_hole": false, 00:11:13.411 "seek_data": false, 00:11:13.411 
"copy": true, 00:11:13.411 "nvme_iov_md": false 00:11:13.411 }, 00:11:13.411 "memory_domains": [ 00:11:13.411 { 00:11:13.411 "dma_device_id": "system", 00:11:13.411 "dma_device_type": 1 00:11:13.411 }, 00:11:13.411 { 00:11:13.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.411 "dma_device_type": 2 00:11:13.411 } 00:11:13.411 ], 00:11:13.411 "driver_specific": {} 00:11:13.411 } 00:11:13.411 ] 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.411 09:48:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.411 "name": "Existed_Raid", 00:11:13.411 "uuid": "5c9204db-7346-467f-ba8d-f320d3e26c38", 00:11:13.411 "strip_size_kb": 0, 00:11:13.411 "state": "online", 00:11:13.411 "raid_level": "raid1", 00:11:13.411 "superblock": false, 00:11:13.411 "num_base_bdevs": 3, 00:11:13.411 "num_base_bdevs_discovered": 3, 00:11:13.411 "num_base_bdevs_operational": 3, 00:11:13.411 "base_bdevs_list": [ 00:11:13.411 { 00:11:13.411 "name": "BaseBdev1", 00:11:13.411 "uuid": "587f7f6a-7d7a-4de8-a073-d719abb30240", 00:11:13.411 "is_configured": true, 00:11:13.411 "data_offset": 0, 00:11:13.411 "data_size": 65536 00:11:13.411 }, 00:11:13.411 { 00:11:13.411 "name": "BaseBdev2", 00:11:13.411 "uuid": "2661d14a-3674-40aa-87ff-3bcff88a0254", 00:11:13.411 "is_configured": true, 00:11:13.411 "data_offset": 0, 00:11:13.411 "data_size": 65536 00:11:13.411 }, 00:11:13.411 { 00:11:13.411 "name": "BaseBdev3", 00:11:13.411 "uuid": "679d1370-748c-47a6-bb25-0f2ec3d42d64", 00:11:13.411 "is_configured": true, 00:11:13.411 "data_offset": 0, 00:11:13.411 "data_size": 65536 00:11:13.411 } 00:11:13.411 ] 00:11:13.411 }' 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.411 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.980 09:48:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:13.980 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:13.980 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:13.980 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:13.980 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:13.980 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:13.980 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:13.980 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:13.980 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.980 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.980 [2024-11-27 09:48:14.867640] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.980 09:48:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.980 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:13.980 "name": "Existed_Raid", 00:11:13.980 "aliases": [ 00:11:13.980 "5c9204db-7346-467f-ba8d-f320d3e26c38" 00:11:13.980 ], 00:11:13.980 "product_name": "Raid Volume", 00:11:13.980 "block_size": 512, 00:11:13.980 "num_blocks": 65536, 00:11:13.980 "uuid": "5c9204db-7346-467f-ba8d-f320d3e26c38", 00:11:13.980 "assigned_rate_limits": { 00:11:13.980 "rw_ios_per_sec": 0, 00:11:13.980 "rw_mbytes_per_sec": 0, 00:11:13.980 "r_mbytes_per_sec": 0, 00:11:13.980 "w_mbytes_per_sec": 0 00:11:13.980 }, 00:11:13.980 "claimed": false, 00:11:13.980 "zoned": false, 
00:11:13.980 "supported_io_types": { 00:11:13.980 "read": true, 00:11:13.980 "write": true, 00:11:13.980 "unmap": false, 00:11:13.980 "flush": false, 00:11:13.980 "reset": true, 00:11:13.980 "nvme_admin": false, 00:11:13.980 "nvme_io": false, 00:11:13.980 "nvme_io_md": false, 00:11:13.980 "write_zeroes": true, 00:11:13.980 "zcopy": false, 00:11:13.980 "get_zone_info": false, 00:11:13.980 "zone_management": false, 00:11:13.980 "zone_append": false, 00:11:13.980 "compare": false, 00:11:13.980 "compare_and_write": false, 00:11:13.980 "abort": false, 00:11:13.980 "seek_hole": false, 00:11:13.980 "seek_data": false, 00:11:13.980 "copy": false, 00:11:13.980 "nvme_iov_md": false 00:11:13.980 }, 00:11:13.980 "memory_domains": [ 00:11:13.980 { 00:11:13.980 "dma_device_id": "system", 00:11:13.980 "dma_device_type": 1 00:11:13.980 }, 00:11:13.980 { 00:11:13.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.980 "dma_device_type": 2 00:11:13.980 }, 00:11:13.980 { 00:11:13.980 "dma_device_id": "system", 00:11:13.980 "dma_device_type": 1 00:11:13.980 }, 00:11:13.980 { 00:11:13.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.980 "dma_device_type": 2 00:11:13.980 }, 00:11:13.980 { 00:11:13.980 "dma_device_id": "system", 00:11:13.981 "dma_device_type": 1 00:11:13.981 }, 00:11:13.981 { 00:11:13.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.981 "dma_device_type": 2 00:11:13.981 } 00:11:13.981 ], 00:11:13.981 "driver_specific": { 00:11:13.981 "raid": { 00:11:13.981 "uuid": "5c9204db-7346-467f-ba8d-f320d3e26c38", 00:11:13.981 "strip_size_kb": 0, 00:11:13.981 "state": "online", 00:11:13.981 "raid_level": "raid1", 00:11:13.981 "superblock": false, 00:11:13.981 "num_base_bdevs": 3, 00:11:13.981 "num_base_bdevs_discovered": 3, 00:11:13.981 "num_base_bdevs_operational": 3, 00:11:13.981 "base_bdevs_list": [ 00:11:13.981 { 00:11:13.981 "name": "BaseBdev1", 00:11:13.981 "uuid": "587f7f6a-7d7a-4de8-a073-d719abb30240", 00:11:13.981 "is_configured": true, 00:11:13.981 
"data_offset": 0, 00:11:13.981 "data_size": 65536 00:11:13.981 }, 00:11:13.981 { 00:11:13.981 "name": "BaseBdev2", 00:11:13.981 "uuid": "2661d14a-3674-40aa-87ff-3bcff88a0254", 00:11:13.981 "is_configured": true, 00:11:13.981 "data_offset": 0, 00:11:13.981 "data_size": 65536 00:11:13.981 }, 00:11:13.981 { 00:11:13.981 "name": "BaseBdev3", 00:11:13.981 "uuid": "679d1370-748c-47a6-bb25-0f2ec3d42d64", 00:11:13.981 "is_configured": true, 00:11:13.981 "data_offset": 0, 00:11:13.981 "data_size": 65536 00:11:13.981 } 00:11:13.981 ] 00:11:13.981 } 00:11:13.981 } 00:11:13.981 }' 00:11:13.981 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:13.981 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:13.981 BaseBdev2 00:11:13.981 BaseBdev3' 00:11:13.981 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.981 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:13.981 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.981 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.981 09:48:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:13.981 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.981 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.981 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.981 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:11:13.981 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.981 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.981 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:13.981 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.981 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.981 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.981 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.981 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.981 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.981 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.981 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:13.981 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.981 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.981 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.981 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.239 [2024-11-27 09:48:15.134874] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.239 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.239 "name": "Existed_Raid", 00:11:14.239 "uuid": "5c9204db-7346-467f-ba8d-f320d3e26c38", 00:11:14.239 "strip_size_kb": 0, 00:11:14.239 "state": "online", 00:11:14.239 "raid_level": "raid1", 00:11:14.239 "superblock": false, 00:11:14.240 "num_base_bdevs": 3, 00:11:14.240 "num_base_bdevs_discovered": 2, 00:11:14.240 "num_base_bdevs_operational": 2, 00:11:14.240 "base_bdevs_list": [ 00:11:14.240 { 00:11:14.240 "name": null, 00:11:14.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.240 "is_configured": false, 00:11:14.240 "data_offset": 0, 00:11:14.240 "data_size": 65536 00:11:14.240 }, 00:11:14.240 { 00:11:14.240 "name": "BaseBdev2", 00:11:14.240 "uuid": "2661d14a-3674-40aa-87ff-3bcff88a0254", 00:11:14.240 "is_configured": true, 00:11:14.240 "data_offset": 0, 00:11:14.240 "data_size": 65536 00:11:14.240 }, 00:11:14.240 { 00:11:14.240 "name": "BaseBdev3", 00:11:14.240 "uuid": "679d1370-748c-47a6-bb25-0f2ec3d42d64", 00:11:14.240 "is_configured": true, 00:11:14.240 "data_offset": 0, 00:11:14.240 "data_size": 65536 00:11:14.240 } 00:11:14.240 ] 
00:11:14.240 }' 00:11:14.240 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.240 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.805 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:14.805 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.806 [2024-11-27 09:48:15.752271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:14.806 09:48:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.806 09:48:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.806 [2024-11-27 09:48:15.914570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:14.806 [2024-11-27 09:48:15.914763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.065 [2024-11-27 09:48:16.020511] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.065 [2024-11-27 09:48:16.020686] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:15.065 [2024-11-27 09:48:16.020737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:15.065 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.065 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:15.065 09:48:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:15.065 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.065 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.065 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.066 BaseBdev2 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.066 
09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.066 [ 00:11:15.066 { 00:11:15.066 "name": "BaseBdev2", 00:11:15.066 "aliases": [ 00:11:15.066 "b7fb9d28-76c3-4b69-8b56-dc21aa48825c" 00:11:15.066 ], 00:11:15.066 "product_name": "Malloc disk", 00:11:15.066 "block_size": 512, 00:11:15.066 "num_blocks": 65536, 00:11:15.066 "uuid": "b7fb9d28-76c3-4b69-8b56-dc21aa48825c", 00:11:15.066 "assigned_rate_limits": { 00:11:15.066 "rw_ios_per_sec": 0, 00:11:15.066 "rw_mbytes_per_sec": 0, 00:11:15.066 "r_mbytes_per_sec": 0, 00:11:15.066 "w_mbytes_per_sec": 0 00:11:15.066 }, 00:11:15.066 "claimed": false, 00:11:15.066 "zoned": false, 00:11:15.066 "supported_io_types": { 00:11:15.066 "read": true, 00:11:15.066 "write": true, 00:11:15.066 "unmap": true, 00:11:15.066 "flush": true, 00:11:15.066 "reset": true, 00:11:15.066 "nvme_admin": false, 00:11:15.066 "nvme_io": false, 00:11:15.066 "nvme_io_md": false, 00:11:15.066 "write_zeroes": true, 
00:11:15.066 "zcopy": true, 00:11:15.066 "get_zone_info": false, 00:11:15.066 "zone_management": false, 00:11:15.066 "zone_append": false, 00:11:15.066 "compare": false, 00:11:15.066 "compare_and_write": false, 00:11:15.066 "abort": true, 00:11:15.066 "seek_hole": false, 00:11:15.066 "seek_data": false, 00:11:15.066 "copy": true, 00:11:15.066 "nvme_iov_md": false 00:11:15.066 }, 00:11:15.066 "memory_domains": [ 00:11:15.066 { 00:11:15.066 "dma_device_id": "system", 00:11:15.066 "dma_device_type": 1 00:11:15.066 }, 00:11:15.066 { 00:11:15.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.066 "dma_device_type": 2 00:11:15.066 } 00:11:15.066 ], 00:11:15.066 "driver_specific": {} 00:11:15.066 } 00:11:15.066 ] 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.066 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.326 BaseBdev3 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.326 09:48:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.326 [ 00:11:15.326 { 00:11:15.326 "name": "BaseBdev3", 00:11:15.326 "aliases": [ 00:11:15.326 "5809e776-6c4b-49cf-93c6-55b58121adde" 00:11:15.326 ], 00:11:15.326 "product_name": "Malloc disk", 00:11:15.326 "block_size": 512, 00:11:15.326 "num_blocks": 65536, 00:11:15.326 "uuid": "5809e776-6c4b-49cf-93c6-55b58121adde", 00:11:15.326 "assigned_rate_limits": { 00:11:15.326 "rw_ios_per_sec": 0, 00:11:15.326 "rw_mbytes_per_sec": 0, 00:11:15.326 "r_mbytes_per_sec": 0, 00:11:15.326 "w_mbytes_per_sec": 0 00:11:15.326 }, 00:11:15.326 "claimed": false, 00:11:15.326 "zoned": false, 00:11:15.326 "supported_io_types": { 00:11:15.326 "read": true, 00:11:15.326 "write": true, 00:11:15.326 "unmap": true, 00:11:15.326 "flush": true, 00:11:15.326 "reset": true, 00:11:15.326 "nvme_admin": false, 00:11:15.326 "nvme_io": false, 00:11:15.326 "nvme_io_md": false, 00:11:15.326 "write_zeroes": true, 
00:11:15.326 "zcopy": true, 00:11:15.326 "get_zone_info": false, 00:11:15.326 "zone_management": false, 00:11:15.326 "zone_append": false, 00:11:15.326 "compare": false, 00:11:15.326 "compare_and_write": false, 00:11:15.326 "abort": true, 00:11:15.326 "seek_hole": false, 00:11:15.326 "seek_data": false, 00:11:15.326 "copy": true, 00:11:15.326 "nvme_iov_md": false 00:11:15.326 }, 00:11:15.326 "memory_domains": [ 00:11:15.326 { 00:11:15.326 "dma_device_id": "system", 00:11:15.326 "dma_device_type": 1 00:11:15.326 }, 00:11:15.326 { 00:11:15.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.326 "dma_device_type": 2 00:11:15.326 } 00:11:15.326 ], 00:11:15.326 "driver_specific": {} 00:11:15.326 } 00:11:15.326 ] 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.326 [2024-11-27 09:48:16.253560] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:15.326 [2024-11-27 09:48:16.253660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:15.326 [2024-11-27 09:48:16.253704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.326 [2024-11-27 09:48:16.255905] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:15.326 "name": "Existed_Raid", 00:11:15.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.326 "strip_size_kb": 0, 00:11:15.326 "state": "configuring", 00:11:15.326 "raid_level": "raid1", 00:11:15.326 "superblock": false, 00:11:15.326 "num_base_bdevs": 3, 00:11:15.326 "num_base_bdevs_discovered": 2, 00:11:15.326 "num_base_bdevs_operational": 3, 00:11:15.326 "base_bdevs_list": [ 00:11:15.326 { 00:11:15.326 "name": "BaseBdev1", 00:11:15.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.326 "is_configured": false, 00:11:15.326 "data_offset": 0, 00:11:15.326 "data_size": 0 00:11:15.326 }, 00:11:15.326 { 00:11:15.326 "name": "BaseBdev2", 00:11:15.326 "uuid": "b7fb9d28-76c3-4b69-8b56-dc21aa48825c", 00:11:15.326 "is_configured": true, 00:11:15.326 "data_offset": 0, 00:11:15.326 "data_size": 65536 00:11:15.326 }, 00:11:15.326 { 00:11:15.326 "name": "BaseBdev3", 00:11:15.326 "uuid": "5809e776-6c4b-49cf-93c6-55b58121adde", 00:11:15.326 "is_configured": true, 00:11:15.326 "data_offset": 0, 00:11:15.326 "data_size": 65536 00:11:15.326 } 00:11:15.326 ] 00:11:15.326 }' 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.326 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.585 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:15.585 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.585 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.585 [2024-11-27 09:48:16.652912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:15.585 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.585 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:11:15.585 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.585 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.585 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.585 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.585 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.585 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.585 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.585 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.585 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.585 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.585 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.585 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.585 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.586 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.586 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.586 "name": "Existed_Raid", 00:11:15.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.586 "strip_size_kb": 0, 00:11:15.586 "state": "configuring", 00:11:15.586 "raid_level": "raid1", 00:11:15.586 "superblock": false, 00:11:15.586 "num_base_bdevs": 3, 
00:11:15.586 "num_base_bdevs_discovered": 1, 00:11:15.586 "num_base_bdevs_operational": 3, 00:11:15.586 "base_bdevs_list": [ 00:11:15.586 { 00:11:15.586 "name": "BaseBdev1", 00:11:15.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.586 "is_configured": false, 00:11:15.586 "data_offset": 0, 00:11:15.586 "data_size": 0 00:11:15.586 }, 00:11:15.586 { 00:11:15.586 "name": null, 00:11:15.586 "uuid": "b7fb9d28-76c3-4b69-8b56-dc21aa48825c", 00:11:15.586 "is_configured": false, 00:11:15.586 "data_offset": 0, 00:11:15.586 "data_size": 65536 00:11:15.586 }, 00:11:15.586 { 00:11:15.586 "name": "BaseBdev3", 00:11:15.586 "uuid": "5809e776-6c4b-49cf-93c6-55b58121adde", 00:11:15.586 "is_configured": true, 00:11:15.586 "data_offset": 0, 00:11:15.586 "data_size": 65536 00:11:15.586 } 00:11:15.586 ] 00:11:15.586 }' 00:11:15.586 09:48:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.586 09:48:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.156 09:48:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.156 [2024-11-27 09:48:17.215438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:16.156 BaseBdev1 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.156 [ 00:11:16.156 { 00:11:16.156 "name": "BaseBdev1", 00:11:16.156 "aliases": [ 00:11:16.156 "9fd2dcf8-9456-4405-be8d-a89ca615ad7b" 00:11:16.156 ], 00:11:16.156 "product_name": "Malloc disk", 
00:11:16.156 "block_size": 512, 00:11:16.156 "num_blocks": 65536, 00:11:16.156 "uuid": "9fd2dcf8-9456-4405-be8d-a89ca615ad7b", 00:11:16.156 "assigned_rate_limits": { 00:11:16.156 "rw_ios_per_sec": 0, 00:11:16.156 "rw_mbytes_per_sec": 0, 00:11:16.156 "r_mbytes_per_sec": 0, 00:11:16.156 "w_mbytes_per_sec": 0 00:11:16.156 }, 00:11:16.156 "claimed": true, 00:11:16.156 "claim_type": "exclusive_write", 00:11:16.156 "zoned": false, 00:11:16.156 "supported_io_types": { 00:11:16.156 "read": true, 00:11:16.156 "write": true, 00:11:16.156 "unmap": true, 00:11:16.156 "flush": true, 00:11:16.156 "reset": true, 00:11:16.156 "nvme_admin": false, 00:11:16.156 "nvme_io": false, 00:11:16.156 "nvme_io_md": false, 00:11:16.156 "write_zeroes": true, 00:11:16.156 "zcopy": true, 00:11:16.156 "get_zone_info": false, 00:11:16.156 "zone_management": false, 00:11:16.156 "zone_append": false, 00:11:16.156 "compare": false, 00:11:16.156 "compare_and_write": false, 00:11:16.156 "abort": true, 00:11:16.156 "seek_hole": false, 00:11:16.156 "seek_data": false, 00:11:16.156 "copy": true, 00:11:16.156 "nvme_iov_md": false 00:11:16.156 }, 00:11:16.156 "memory_domains": [ 00:11:16.156 { 00:11:16.156 "dma_device_id": "system", 00:11:16.156 "dma_device_type": 1 00:11:16.156 }, 00:11:16.156 { 00:11:16.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.156 "dma_device_type": 2 00:11:16.156 } 00:11:16.156 ], 00:11:16.156 "driver_specific": {} 00:11:16.156 } 00:11:16.156 ] 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.156 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.416 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.416 "name": "Existed_Raid", 00:11:16.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.416 "strip_size_kb": 0, 00:11:16.416 "state": "configuring", 00:11:16.416 "raid_level": "raid1", 00:11:16.416 "superblock": false, 00:11:16.416 "num_base_bdevs": 3, 00:11:16.416 "num_base_bdevs_discovered": 2, 00:11:16.416 "num_base_bdevs_operational": 3, 00:11:16.416 "base_bdevs_list": [ 00:11:16.416 { 00:11:16.417 "name": "BaseBdev1", 00:11:16.417 "uuid": 
"9fd2dcf8-9456-4405-be8d-a89ca615ad7b", 00:11:16.417 "is_configured": true, 00:11:16.417 "data_offset": 0, 00:11:16.417 "data_size": 65536 00:11:16.417 }, 00:11:16.417 { 00:11:16.417 "name": null, 00:11:16.417 "uuid": "b7fb9d28-76c3-4b69-8b56-dc21aa48825c", 00:11:16.417 "is_configured": false, 00:11:16.417 "data_offset": 0, 00:11:16.417 "data_size": 65536 00:11:16.417 }, 00:11:16.417 { 00:11:16.417 "name": "BaseBdev3", 00:11:16.417 "uuid": "5809e776-6c4b-49cf-93c6-55b58121adde", 00:11:16.417 "is_configured": true, 00:11:16.417 "data_offset": 0, 00:11:16.417 "data_size": 65536 00:11:16.417 } 00:11:16.417 ] 00:11:16.417 }' 00:11:16.417 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.417 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.676 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.676 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.676 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.676 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:16.676 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.676 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:16.676 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:16.676 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.676 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.677 [2024-11-27 09:48:17.706644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:16.677 09:48:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.677 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:16.677 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.677 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.677 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.677 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.677 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.677 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.677 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.677 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.677 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.677 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.677 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.677 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.677 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.677 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.677 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.677 "name": "Existed_Raid", 00:11:16.677 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:16.677 "strip_size_kb": 0, 00:11:16.677 "state": "configuring", 00:11:16.677 "raid_level": "raid1", 00:11:16.677 "superblock": false, 00:11:16.677 "num_base_bdevs": 3, 00:11:16.677 "num_base_bdevs_discovered": 1, 00:11:16.677 "num_base_bdevs_operational": 3, 00:11:16.677 "base_bdevs_list": [ 00:11:16.677 { 00:11:16.677 "name": "BaseBdev1", 00:11:16.677 "uuid": "9fd2dcf8-9456-4405-be8d-a89ca615ad7b", 00:11:16.677 "is_configured": true, 00:11:16.677 "data_offset": 0, 00:11:16.677 "data_size": 65536 00:11:16.677 }, 00:11:16.677 { 00:11:16.677 "name": null, 00:11:16.677 "uuid": "b7fb9d28-76c3-4b69-8b56-dc21aa48825c", 00:11:16.677 "is_configured": false, 00:11:16.677 "data_offset": 0, 00:11:16.677 "data_size": 65536 00:11:16.677 }, 00:11:16.677 { 00:11:16.677 "name": null, 00:11:16.677 "uuid": "5809e776-6c4b-49cf-93c6-55b58121adde", 00:11:16.677 "is_configured": false, 00:11:16.677 "data_offset": 0, 00:11:16.677 "data_size": 65536 00:11:16.677 } 00:11:16.677 ] 00:11:16.677 }' 00:11:16.677 09:48:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.677 09:48:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.246 [2024-11-27 09:48:18.157929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.246 "name": "Existed_Raid", 00:11:17.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.246 "strip_size_kb": 0, 00:11:17.246 "state": "configuring", 00:11:17.246 "raid_level": "raid1", 00:11:17.246 "superblock": false, 00:11:17.246 "num_base_bdevs": 3, 00:11:17.246 "num_base_bdevs_discovered": 2, 00:11:17.246 "num_base_bdevs_operational": 3, 00:11:17.246 "base_bdevs_list": [ 00:11:17.246 { 00:11:17.246 "name": "BaseBdev1", 00:11:17.246 "uuid": "9fd2dcf8-9456-4405-be8d-a89ca615ad7b", 00:11:17.246 "is_configured": true, 00:11:17.246 "data_offset": 0, 00:11:17.246 "data_size": 65536 00:11:17.246 }, 00:11:17.246 { 00:11:17.246 "name": null, 00:11:17.246 "uuid": "b7fb9d28-76c3-4b69-8b56-dc21aa48825c", 00:11:17.246 "is_configured": false, 00:11:17.246 "data_offset": 0, 00:11:17.246 "data_size": 65536 00:11:17.246 }, 00:11:17.246 { 00:11:17.246 "name": "BaseBdev3", 00:11:17.246 "uuid": "5809e776-6c4b-49cf-93c6-55b58121adde", 00:11:17.246 "is_configured": true, 00:11:17.246 "data_offset": 0, 00:11:17.246 "data_size": 65536 00:11:17.246 } 00:11:17.246 ] 00:11:17.246 }' 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.246 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.505 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.505 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.505 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.505 09:48:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:17.505 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.764 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:17.764 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:17.764 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.765 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.765 [2024-11-27 09:48:18.653146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:17.765 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.765 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:17.765 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.765 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.765 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.765 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.765 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.765 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.765 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.765 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.765 09:48:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.765 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.765 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.765 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.765 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.765 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.765 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.765 "name": "Existed_Raid", 00:11:17.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.765 "strip_size_kb": 0, 00:11:17.765 "state": "configuring", 00:11:17.765 "raid_level": "raid1", 00:11:17.765 "superblock": false, 00:11:17.765 "num_base_bdevs": 3, 00:11:17.765 "num_base_bdevs_discovered": 1, 00:11:17.765 "num_base_bdevs_operational": 3, 00:11:17.765 "base_bdevs_list": [ 00:11:17.765 { 00:11:17.765 "name": null, 00:11:17.765 "uuid": "9fd2dcf8-9456-4405-be8d-a89ca615ad7b", 00:11:17.765 "is_configured": false, 00:11:17.765 "data_offset": 0, 00:11:17.765 "data_size": 65536 00:11:17.765 }, 00:11:17.765 { 00:11:17.765 "name": null, 00:11:17.765 "uuid": "b7fb9d28-76c3-4b69-8b56-dc21aa48825c", 00:11:17.765 "is_configured": false, 00:11:17.765 "data_offset": 0, 00:11:17.765 "data_size": 65536 00:11:17.765 }, 00:11:17.765 { 00:11:17.765 "name": "BaseBdev3", 00:11:17.765 "uuid": "5809e776-6c4b-49cf-93c6-55b58121adde", 00:11:17.765 "is_configured": true, 00:11:17.765 "data_offset": 0, 00:11:17.765 "data_size": 65536 00:11:17.765 } 00:11:17.765 ] 00:11:17.765 }' 00:11:17.765 09:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.765 09:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.333 [2024-11-27 09:48:19.255682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.333 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.333 "name": "Existed_Raid", 00:11:18.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.333 "strip_size_kb": 0, 00:11:18.333 "state": "configuring", 00:11:18.333 "raid_level": "raid1", 00:11:18.333 "superblock": false, 00:11:18.333 "num_base_bdevs": 3, 00:11:18.333 "num_base_bdevs_discovered": 2, 00:11:18.333 "num_base_bdevs_operational": 3, 00:11:18.333 "base_bdevs_list": [ 00:11:18.333 { 00:11:18.333 "name": null, 00:11:18.333 "uuid": "9fd2dcf8-9456-4405-be8d-a89ca615ad7b", 00:11:18.334 "is_configured": false, 00:11:18.334 "data_offset": 0, 00:11:18.334 "data_size": 65536 00:11:18.334 }, 00:11:18.334 { 00:11:18.334 "name": "BaseBdev2", 00:11:18.334 "uuid": "b7fb9d28-76c3-4b69-8b56-dc21aa48825c", 00:11:18.334 "is_configured": true, 00:11:18.334 "data_offset": 0, 00:11:18.334 "data_size": 65536 00:11:18.334 }, 00:11:18.334 { 00:11:18.334 "name": "BaseBdev3", 
00:11:18.334 "uuid": "5809e776-6c4b-49cf-93c6-55b58121adde", 00:11:18.334 "is_configured": true, 00:11:18.334 "data_offset": 0, 00:11:18.334 "data_size": 65536 00:11:18.334 } 00:11:18.334 ] 00:11:18.334 }' 00:11:18.334 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.334 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.903 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.903 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9fd2dcf8-9456-4405-be8d-a89ca615ad7b 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:18.904 [2024-11-27 09:48:19.889513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:18.904 [2024-11-27 09:48:19.889586] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:18.904 [2024-11-27 09:48:19.889594] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:18.904 [2024-11-27 09:48:19.889867] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:18.904 [2024-11-27 09:48:19.890092] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:18.904 [2024-11-27 09:48:19.890107] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:18.904 [2024-11-27 09:48:19.890418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.904 NewBaseBdev 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.904 
09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.904 [ 00:11:18.904 { 00:11:18.904 "name": "NewBaseBdev", 00:11:18.904 "aliases": [ 00:11:18.904 "9fd2dcf8-9456-4405-be8d-a89ca615ad7b" 00:11:18.904 ], 00:11:18.904 "product_name": "Malloc disk", 00:11:18.904 "block_size": 512, 00:11:18.904 "num_blocks": 65536, 00:11:18.904 "uuid": "9fd2dcf8-9456-4405-be8d-a89ca615ad7b", 00:11:18.904 "assigned_rate_limits": { 00:11:18.904 "rw_ios_per_sec": 0, 00:11:18.904 "rw_mbytes_per_sec": 0, 00:11:18.904 "r_mbytes_per_sec": 0, 00:11:18.904 "w_mbytes_per_sec": 0 00:11:18.904 }, 00:11:18.904 "claimed": true, 00:11:18.904 "claim_type": "exclusive_write", 00:11:18.904 "zoned": false, 00:11:18.904 "supported_io_types": { 00:11:18.904 "read": true, 00:11:18.904 "write": true, 00:11:18.904 "unmap": true, 00:11:18.904 "flush": true, 00:11:18.904 "reset": true, 00:11:18.904 "nvme_admin": false, 00:11:18.904 "nvme_io": false, 00:11:18.904 "nvme_io_md": false, 00:11:18.904 "write_zeroes": true, 00:11:18.904 "zcopy": true, 00:11:18.904 "get_zone_info": false, 00:11:18.904 "zone_management": false, 00:11:18.904 "zone_append": false, 00:11:18.904 "compare": false, 00:11:18.904 "compare_and_write": false, 00:11:18.904 "abort": true, 00:11:18.904 "seek_hole": false, 00:11:18.904 "seek_data": false, 00:11:18.904 "copy": true, 00:11:18.904 "nvme_iov_md": false 00:11:18.904 }, 00:11:18.904 "memory_domains": [ 00:11:18.904 { 00:11:18.904 "dma_device_id": "system", 00:11:18.904 "dma_device_type": 1 
00:11:18.904 }, 00:11:18.904 { 00:11:18.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.904 "dma_device_type": 2 00:11:18.904 } 00:11:18.904 ], 00:11:18.904 "driver_specific": {} 00:11:18.904 } 00:11:18.904 ] 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.904 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.904 "name": "Existed_Raid", 00:11:18.904 "uuid": "03bb63c0-cf15-45bb-8c54-250cb8413a96", 00:11:18.904 "strip_size_kb": 0, 00:11:18.904 "state": "online", 00:11:18.904 "raid_level": "raid1", 00:11:18.904 "superblock": false, 00:11:18.904 "num_base_bdevs": 3, 00:11:18.904 "num_base_bdevs_discovered": 3, 00:11:18.904 "num_base_bdevs_operational": 3, 00:11:18.904 "base_bdevs_list": [ 00:11:18.904 { 00:11:18.904 "name": "NewBaseBdev", 00:11:18.904 "uuid": "9fd2dcf8-9456-4405-be8d-a89ca615ad7b", 00:11:18.904 "is_configured": true, 00:11:18.904 "data_offset": 0, 00:11:18.904 "data_size": 65536 00:11:18.904 }, 00:11:18.904 { 00:11:18.905 "name": "BaseBdev2", 00:11:18.905 "uuid": "b7fb9d28-76c3-4b69-8b56-dc21aa48825c", 00:11:18.905 "is_configured": true, 00:11:18.905 "data_offset": 0, 00:11:18.905 "data_size": 65536 00:11:18.905 }, 00:11:18.905 { 00:11:18.905 "name": "BaseBdev3", 00:11:18.905 "uuid": "5809e776-6c4b-49cf-93c6-55b58121adde", 00:11:18.905 "is_configured": true, 00:11:18.905 "data_offset": 0, 00:11:18.905 "data_size": 65536 00:11:18.905 } 00:11:18.905 ] 00:11:18.905 }' 00:11:18.905 09:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.905 09:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.475 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:19.475 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:19.475 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:19.475 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:11:19.475 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:19.475 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:19.475 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:19.475 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:19.475 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.475 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.475 [2024-11-27 09:48:20.445002] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.475 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.475 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:19.475 "name": "Existed_Raid", 00:11:19.475 "aliases": [ 00:11:19.475 "03bb63c0-cf15-45bb-8c54-250cb8413a96" 00:11:19.475 ], 00:11:19.475 "product_name": "Raid Volume", 00:11:19.475 "block_size": 512, 00:11:19.475 "num_blocks": 65536, 00:11:19.475 "uuid": "03bb63c0-cf15-45bb-8c54-250cb8413a96", 00:11:19.475 "assigned_rate_limits": { 00:11:19.475 "rw_ios_per_sec": 0, 00:11:19.475 "rw_mbytes_per_sec": 0, 00:11:19.475 "r_mbytes_per_sec": 0, 00:11:19.475 "w_mbytes_per_sec": 0 00:11:19.475 }, 00:11:19.475 "claimed": false, 00:11:19.475 "zoned": false, 00:11:19.475 "supported_io_types": { 00:11:19.475 "read": true, 00:11:19.475 "write": true, 00:11:19.475 "unmap": false, 00:11:19.475 "flush": false, 00:11:19.475 "reset": true, 00:11:19.475 "nvme_admin": false, 00:11:19.475 "nvme_io": false, 00:11:19.475 "nvme_io_md": false, 00:11:19.475 "write_zeroes": true, 00:11:19.475 "zcopy": false, 00:11:19.475 "get_zone_info": false, 00:11:19.475 "zone_management": false, 00:11:19.475 
"zone_append": false, 00:11:19.475 "compare": false, 00:11:19.475 "compare_and_write": false, 00:11:19.475 "abort": false, 00:11:19.475 "seek_hole": false, 00:11:19.475 "seek_data": false, 00:11:19.475 "copy": false, 00:11:19.475 "nvme_iov_md": false 00:11:19.475 }, 00:11:19.475 "memory_domains": [ 00:11:19.475 { 00:11:19.475 "dma_device_id": "system", 00:11:19.475 "dma_device_type": 1 00:11:19.475 }, 00:11:19.475 { 00:11:19.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.475 "dma_device_type": 2 00:11:19.475 }, 00:11:19.475 { 00:11:19.475 "dma_device_id": "system", 00:11:19.475 "dma_device_type": 1 00:11:19.475 }, 00:11:19.475 { 00:11:19.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.475 "dma_device_type": 2 00:11:19.475 }, 00:11:19.475 { 00:11:19.475 "dma_device_id": "system", 00:11:19.475 "dma_device_type": 1 00:11:19.475 }, 00:11:19.475 { 00:11:19.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.475 "dma_device_type": 2 00:11:19.475 } 00:11:19.475 ], 00:11:19.475 "driver_specific": { 00:11:19.475 "raid": { 00:11:19.475 "uuid": "03bb63c0-cf15-45bb-8c54-250cb8413a96", 00:11:19.475 "strip_size_kb": 0, 00:11:19.475 "state": "online", 00:11:19.475 "raid_level": "raid1", 00:11:19.475 "superblock": false, 00:11:19.475 "num_base_bdevs": 3, 00:11:19.475 "num_base_bdevs_discovered": 3, 00:11:19.475 "num_base_bdevs_operational": 3, 00:11:19.475 "base_bdevs_list": [ 00:11:19.475 { 00:11:19.475 "name": "NewBaseBdev", 00:11:19.475 "uuid": "9fd2dcf8-9456-4405-be8d-a89ca615ad7b", 00:11:19.475 "is_configured": true, 00:11:19.475 "data_offset": 0, 00:11:19.475 "data_size": 65536 00:11:19.475 }, 00:11:19.475 { 00:11:19.475 "name": "BaseBdev2", 00:11:19.475 "uuid": "b7fb9d28-76c3-4b69-8b56-dc21aa48825c", 00:11:19.476 "is_configured": true, 00:11:19.476 "data_offset": 0, 00:11:19.476 "data_size": 65536 00:11:19.476 }, 00:11:19.476 { 00:11:19.476 "name": "BaseBdev3", 00:11:19.476 "uuid": "5809e776-6c4b-49cf-93c6-55b58121adde", 00:11:19.476 "is_configured": true, 
00:11:19.476 "data_offset": 0, 00:11:19.476 "data_size": 65536 00:11:19.476 } 00:11:19.476 ] 00:11:19.476 } 00:11:19.476 } 00:11:19.476 }' 00:11:19.476 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:19.476 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:19.476 BaseBdev2 00:11:19.476 BaseBdev3' 00:11:19.476 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.476 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:19.476 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.476 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:19.476 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.476 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.476 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.476 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.476 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.476 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.476 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.476 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:19.476 09:48:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.476 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.476 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.476 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.735 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.735 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.735 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.735 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.735 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:19.735 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.735 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.735 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.735 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.735 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.735 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:19.735 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.735 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.735 [2024-11-27 09:48:20.700183] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:11:19.735 [2024-11-27 09:48:20.700267] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.735 [2024-11-27 09:48:20.700405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.736 [2024-11-27 09:48:20.700787] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.736 [2024-11-27 09:48:20.700851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:19.736 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.736 09:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67663 00:11:19.736 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67663 ']' 00:11:19.736 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67663 00:11:19.736 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:19.736 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.736 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67663 00:11:19.736 killing process with pid 67663 00:11:19.736 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.736 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.736 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67663' 00:11:19.736 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67663 00:11:19.736 [2024-11-27 09:48:20.746870] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:11:19.736 09:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67663 00:11:19.995 [2024-11-27 09:48:21.082537] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:21.377 ************************************ 00:11:21.377 END TEST raid_state_function_test 00:11:21.377 ************************************ 00:11:21.377 09:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:21.377 00:11:21.377 real 0m10.991s 00:11:21.377 user 0m17.181s 00:11:21.377 sys 0m2.061s 00:11:21.377 09:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.377 09:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.377 09:48:22 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:21.377 09:48:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:21.377 09:48:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.377 09:48:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:21.377 ************************************ 00:11:21.377 START TEST raid_state_function_test_sb 00:11:21.377 ************************************ 00:11:21.377 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:11:21.377 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:21.377 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:21.377 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:21.377 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:21.377 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:11:21.377 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.377 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:21.377 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.377 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68285 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68285' 00:11:21.378 Process raid pid: 68285 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68285 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68285 ']' 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.378 09:48:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.378 [2024-11-27 09:48:22.483670] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:11:21.378 [2024-11-27 09:48:22.483860] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.637 [2024-11-27 09:48:22.658804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.897 [2024-11-27 09:48:22.798022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.156 [2024-11-27 09:48:23.037538] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.156 [2024-11-27 09:48:23.037583] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.415 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.415 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:22.415 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:22.415 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.415 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.415 [2024-11-27 09:48:23.327270] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:22.415 [2024-11-27 09:48:23.327333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:22.416 [2024-11-27 09:48:23.327352] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.416 [2024-11-27 09:48:23.327364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.416 [2024-11-27 09:48:23.327370] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:22.416 [2024-11-27 09:48:23.327380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.416 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.416 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:22.416 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.416 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.416 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.416 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.416 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.416 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.416 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.416 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.416 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.416 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.416 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.416 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.416 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.416 09:48:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.416 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.416 "name": "Existed_Raid", 00:11:22.416 "uuid": "12208a4c-9fdf-4552-baaa-03d8fc5d1bea", 00:11:22.416 "strip_size_kb": 0, 00:11:22.416 "state": "configuring", 00:11:22.416 "raid_level": "raid1", 00:11:22.416 "superblock": true, 00:11:22.416 "num_base_bdevs": 3, 00:11:22.416 "num_base_bdevs_discovered": 0, 00:11:22.416 "num_base_bdevs_operational": 3, 00:11:22.416 "base_bdevs_list": [ 00:11:22.416 { 00:11:22.416 "name": "BaseBdev1", 00:11:22.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.416 "is_configured": false, 00:11:22.416 "data_offset": 0, 00:11:22.416 "data_size": 0 00:11:22.416 }, 00:11:22.416 { 00:11:22.416 "name": "BaseBdev2", 00:11:22.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.416 "is_configured": false, 00:11:22.416 "data_offset": 0, 00:11:22.416 "data_size": 0 00:11:22.416 }, 00:11:22.416 { 00:11:22.416 "name": "BaseBdev3", 00:11:22.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.416 "is_configured": false, 00:11:22.416 "data_offset": 0, 00:11:22.416 "data_size": 0 00:11:22.416 } 00:11:22.416 ] 00:11:22.416 }' 00:11:22.416 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.416 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.675 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:22.675 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.675 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.675 [2024-11-27 09:48:23.754478] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.675 [2024-11-27 09:48:23.754520] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:22.675 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.675 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:22.675 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.675 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.675 [2024-11-27 09:48:23.766439] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:22.675 [2024-11-27 09:48:23.766484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:22.675 [2024-11-27 09:48:23.766494] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.675 [2024-11-27 09:48:23.766521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.675 [2024-11-27 09:48:23.766527] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:22.675 [2024-11-27 09:48:23.766537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.675 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.675 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:22.675 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.675 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.934 [2024-11-27 09:48:23.821170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.934 BaseBdev1 
00:11:22.934 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.934 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:22.934 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:22.934 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.934 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:22.934 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.934 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.934 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.934 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.934 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.934 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.934 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:22.934 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.934 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.934 [ 00:11:22.934 { 00:11:22.934 "name": "BaseBdev1", 00:11:22.934 "aliases": [ 00:11:22.934 "23f6e4b7-efc5-49b3-816f-f9d1a058aef9" 00:11:22.934 ], 00:11:22.934 "product_name": "Malloc disk", 00:11:22.934 "block_size": 512, 00:11:22.934 "num_blocks": 65536, 00:11:22.934 "uuid": "23f6e4b7-efc5-49b3-816f-f9d1a058aef9", 00:11:22.934 "assigned_rate_limits": { 00:11:22.934 
"rw_ios_per_sec": 0, 00:11:22.934 "rw_mbytes_per_sec": 0, 00:11:22.934 "r_mbytes_per_sec": 0, 00:11:22.934 "w_mbytes_per_sec": 0 00:11:22.934 }, 00:11:22.934 "claimed": true, 00:11:22.934 "claim_type": "exclusive_write", 00:11:22.934 "zoned": false, 00:11:22.934 "supported_io_types": { 00:11:22.934 "read": true, 00:11:22.934 "write": true, 00:11:22.934 "unmap": true, 00:11:22.934 "flush": true, 00:11:22.934 "reset": true, 00:11:22.934 "nvme_admin": false, 00:11:22.934 "nvme_io": false, 00:11:22.934 "nvme_io_md": false, 00:11:22.934 "write_zeroes": true, 00:11:22.934 "zcopy": true, 00:11:22.934 "get_zone_info": false, 00:11:22.934 "zone_management": false, 00:11:22.934 "zone_append": false, 00:11:22.934 "compare": false, 00:11:22.934 "compare_and_write": false, 00:11:22.934 "abort": true, 00:11:22.934 "seek_hole": false, 00:11:22.934 "seek_data": false, 00:11:22.934 "copy": true, 00:11:22.934 "nvme_iov_md": false 00:11:22.934 }, 00:11:22.934 "memory_domains": [ 00:11:22.934 { 00:11:22.935 "dma_device_id": "system", 00:11:22.935 "dma_device_type": 1 00:11:22.935 }, 00:11:22.935 { 00:11:22.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.935 "dma_device_type": 2 00:11:22.935 } 00:11:22.935 ], 00:11:22.935 "driver_specific": {} 00:11:22.935 } 00:11:22.935 ] 00:11:22.935 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.935 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:22.935 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:22.935 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.935 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.935 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:22.935 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.935 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.935 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.935 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.935 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.935 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.935 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.935 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.935 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.935 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.935 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.935 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.935 "name": "Existed_Raid", 00:11:22.935 "uuid": "db97ff9d-3593-4484-8f4b-65424bdb16ee", 00:11:22.935 "strip_size_kb": 0, 00:11:22.935 "state": "configuring", 00:11:22.935 "raid_level": "raid1", 00:11:22.935 "superblock": true, 00:11:22.935 "num_base_bdevs": 3, 00:11:22.935 "num_base_bdevs_discovered": 1, 00:11:22.935 "num_base_bdevs_operational": 3, 00:11:22.935 "base_bdevs_list": [ 00:11:22.935 { 00:11:22.935 "name": "BaseBdev1", 00:11:22.935 "uuid": "23f6e4b7-efc5-49b3-816f-f9d1a058aef9", 00:11:22.935 "is_configured": true, 00:11:22.935 "data_offset": 2048, 00:11:22.935 "data_size": 63488 
00:11:22.935 }, 00:11:22.935 { 00:11:22.935 "name": "BaseBdev2", 00:11:22.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.935 "is_configured": false, 00:11:22.935 "data_offset": 0, 00:11:22.935 "data_size": 0 00:11:22.935 }, 00:11:22.935 { 00:11:22.935 "name": "BaseBdev3", 00:11:22.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.935 "is_configured": false, 00:11:22.935 "data_offset": 0, 00:11:22.935 "data_size": 0 00:11:22.935 } 00:11:22.935 ] 00:11:22.935 }' 00:11:22.935 09:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.935 09:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.194 [2024-11-27 09:48:24.264457] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:23.194 [2024-11-27 09:48:24.264518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.194 [2024-11-27 09:48:24.276476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.194 [2024-11-27 09:48:24.278700] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:23.194 [2024-11-27 09:48:24.278744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:23.194 [2024-11-27 09:48:24.278756] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:23.194 [2024-11-27 09:48:24.278766] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.194 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.453 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.453 "name": "Existed_Raid", 00:11:23.453 "uuid": "9647cb8b-f8dd-4b63-8c0d-3d4718958e22", 00:11:23.453 "strip_size_kb": 0, 00:11:23.453 "state": "configuring", 00:11:23.453 "raid_level": "raid1", 00:11:23.453 "superblock": true, 00:11:23.453 "num_base_bdevs": 3, 00:11:23.453 "num_base_bdevs_discovered": 1, 00:11:23.453 "num_base_bdevs_operational": 3, 00:11:23.453 "base_bdevs_list": [ 00:11:23.453 { 00:11:23.453 "name": "BaseBdev1", 00:11:23.453 "uuid": "23f6e4b7-efc5-49b3-816f-f9d1a058aef9", 00:11:23.453 "is_configured": true, 00:11:23.453 "data_offset": 2048, 00:11:23.453 "data_size": 63488 00:11:23.453 }, 00:11:23.453 { 00:11:23.453 "name": "BaseBdev2", 00:11:23.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.453 "is_configured": false, 00:11:23.453 "data_offset": 0, 00:11:23.453 "data_size": 0 00:11:23.453 }, 00:11:23.453 { 00:11:23.453 "name": "BaseBdev3", 00:11:23.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.453 "is_configured": false, 00:11:23.453 "data_offset": 0, 00:11:23.453 "data_size": 0 00:11:23.453 } 00:11:23.453 ] 00:11:23.453 }' 00:11:23.453 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.453 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:23.712 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:23.712 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.712 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.712 [2024-11-27 09:48:24.729596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.712 BaseBdev2 00:11:23.712 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.712 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:23.712 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:23.712 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.712 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:23.712 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.712 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.712 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.712 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.712 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.712 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.712 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:23.712 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:23.712 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.712 [ 00:11:23.712 { 00:11:23.712 "name": "BaseBdev2", 00:11:23.712 "aliases": [ 00:11:23.712 "e29ed908-81d6-4fd1-95d4-49d58f20cdd8" 00:11:23.712 ], 00:11:23.712 "product_name": "Malloc disk", 00:11:23.712 "block_size": 512, 00:11:23.712 "num_blocks": 65536, 00:11:23.712 "uuid": "e29ed908-81d6-4fd1-95d4-49d58f20cdd8", 00:11:23.712 "assigned_rate_limits": { 00:11:23.712 "rw_ios_per_sec": 0, 00:11:23.712 "rw_mbytes_per_sec": 0, 00:11:23.712 "r_mbytes_per_sec": 0, 00:11:23.712 "w_mbytes_per_sec": 0 00:11:23.712 }, 00:11:23.712 "claimed": true, 00:11:23.712 "claim_type": "exclusive_write", 00:11:23.712 "zoned": false, 00:11:23.712 "supported_io_types": { 00:11:23.712 "read": true, 00:11:23.713 "write": true, 00:11:23.713 "unmap": true, 00:11:23.713 "flush": true, 00:11:23.713 "reset": true, 00:11:23.713 "nvme_admin": false, 00:11:23.713 "nvme_io": false, 00:11:23.713 "nvme_io_md": false, 00:11:23.713 "write_zeroes": true, 00:11:23.713 "zcopy": true, 00:11:23.713 "get_zone_info": false, 00:11:23.713 "zone_management": false, 00:11:23.713 "zone_append": false, 00:11:23.713 "compare": false, 00:11:23.713 "compare_and_write": false, 00:11:23.713 "abort": true, 00:11:23.713 "seek_hole": false, 00:11:23.713 "seek_data": false, 00:11:23.713 "copy": true, 00:11:23.713 "nvme_iov_md": false 00:11:23.713 }, 00:11:23.713 "memory_domains": [ 00:11:23.713 { 00:11:23.713 "dma_device_id": "system", 00:11:23.713 "dma_device_type": 1 00:11:23.713 }, 00:11:23.713 { 00:11:23.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.713 "dma_device_type": 2 00:11:23.713 } 00:11:23.713 ], 00:11:23.713 "driver_specific": {} 00:11:23.713 } 00:11:23.713 ] 00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.713 
09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.713 "name": "Existed_Raid", 00:11:23.713 "uuid": "9647cb8b-f8dd-4b63-8c0d-3d4718958e22", 00:11:23.713 "strip_size_kb": 0, 00:11:23.713 "state": "configuring", 00:11:23.713 "raid_level": "raid1", 00:11:23.713 "superblock": true, 00:11:23.713 "num_base_bdevs": 3, 00:11:23.713 "num_base_bdevs_discovered": 2, 00:11:23.713 "num_base_bdevs_operational": 3, 00:11:23.713 "base_bdevs_list": [ 00:11:23.713 { 00:11:23.713 "name": "BaseBdev1", 00:11:23.713 "uuid": "23f6e4b7-efc5-49b3-816f-f9d1a058aef9", 00:11:23.713 "is_configured": true, 00:11:23.713 "data_offset": 2048, 00:11:23.713 "data_size": 63488 00:11:23.713 }, 00:11:23.713 { 00:11:23.713 "name": "BaseBdev2", 00:11:23.713 "uuid": "e29ed908-81d6-4fd1-95d4-49d58f20cdd8", 00:11:23.713 "is_configured": true, 00:11:23.713 "data_offset": 2048, 00:11:23.713 "data_size": 63488 00:11:23.713 }, 00:11:23.713 { 00:11:23.713 "name": "BaseBdev3", 00:11:23.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.713 "is_configured": false, 00:11:23.713 "data_offset": 0, 00:11:23.713 "data_size": 0 00:11:23.713 } 00:11:23.713 ] 00:11:23.713 }' 00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.713 09:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.282 [2024-11-27 09:48:25.258843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:24.282 [2024-11-27 09:48:25.259205] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:11:24.282 [2024-11-27 09:48:25.259233] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:24.282 [2024-11-27 09:48:25.259567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:24.282 [2024-11-27 09:48:25.259760] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:24.282 [2024-11-27 09:48:25.259770] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:24.282 BaseBdev3 00:11:24.282 [2024-11-27 09:48:25.259927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.282 09:48:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.282 [ 00:11:24.282 { 00:11:24.282 "name": "BaseBdev3", 00:11:24.282 "aliases": [ 00:11:24.282 "9c3d9770-460e-423a-94ec-f5fa7f688e35" 00:11:24.282 ], 00:11:24.282 "product_name": "Malloc disk", 00:11:24.282 "block_size": 512, 00:11:24.282 "num_blocks": 65536, 00:11:24.282 "uuid": "9c3d9770-460e-423a-94ec-f5fa7f688e35", 00:11:24.282 "assigned_rate_limits": { 00:11:24.282 "rw_ios_per_sec": 0, 00:11:24.282 "rw_mbytes_per_sec": 0, 00:11:24.282 "r_mbytes_per_sec": 0, 00:11:24.282 "w_mbytes_per_sec": 0 00:11:24.282 }, 00:11:24.282 "claimed": true, 00:11:24.282 "claim_type": "exclusive_write", 00:11:24.282 "zoned": false, 00:11:24.282 "supported_io_types": { 00:11:24.282 "read": true, 00:11:24.282 "write": true, 00:11:24.282 "unmap": true, 00:11:24.282 "flush": true, 00:11:24.282 "reset": true, 00:11:24.282 "nvme_admin": false, 00:11:24.282 "nvme_io": false, 00:11:24.282 "nvme_io_md": false, 00:11:24.282 "write_zeroes": true, 00:11:24.282 "zcopy": true, 00:11:24.282 "get_zone_info": false, 00:11:24.282 "zone_management": false, 00:11:24.282 "zone_append": false, 00:11:24.282 "compare": false, 00:11:24.282 "compare_and_write": false, 00:11:24.282 "abort": true, 00:11:24.282 "seek_hole": false, 00:11:24.282 "seek_data": false, 00:11:24.282 "copy": true, 00:11:24.282 "nvme_iov_md": false 00:11:24.282 }, 00:11:24.282 "memory_domains": [ 00:11:24.282 { 00:11:24.282 "dma_device_id": "system", 00:11:24.282 "dma_device_type": 1 00:11:24.282 }, 00:11:24.282 { 00:11:24.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.282 "dma_device_type": 2 00:11:24.282 } 00:11:24.282 ], 00:11:24.282 "driver_specific": {} 00:11:24.282 } 00:11:24.282 ] 
00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.282 09:48:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.282 "name": "Existed_Raid", 00:11:24.282 "uuid": "9647cb8b-f8dd-4b63-8c0d-3d4718958e22", 00:11:24.282 "strip_size_kb": 0, 00:11:24.282 "state": "online", 00:11:24.282 "raid_level": "raid1", 00:11:24.282 "superblock": true, 00:11:24.282 "num_base_bdevs": 3, 00:11:24.282 "num_base_bdevs_discovered": 3, 00:11:24.282 "num_base_bdevs_operational": 3, 00:11:24.282 "base_bdevs_list": [ 00:11:24.282 { 00:11:24.282 "name": "BaseBdev1", 00:11:24.282 "uuid": "23f6e4b7-efc5-49b3-816f-f9d1a058aef9", 00:11:24.282 "is_configured": true, 00:11:24.282 "data_offset": 2048, 00:11:24.282 "data_size": 63488 00:11:24.282 }, 00:11:24.282 { 00:11:24.282 "name": "BaseBdev2", 00:11:24.282 "uuid": "e29ed908-81d6-4fd1-95d4-49d58f20cdd8", 00:11:24.282 "is_configured": true, 00:11:24.282 "data_offset": 2048, 00:11:24.282 "data_size": 63488 00:11:24.282 }, 00:11:24.282 { 00:11:24.282 "name": "BaseBdev3", 00:11:24.282 "uuid": "9c3d9770-460e-423a-94ec-f5fa7f688e35", 00:11:24.282 "is_configured": true, 00:11:24.282 "data_offset": 2048, 00:11:24.282 "data_size": 63488 00:11:24.282 } 00:11:24.282 ] 00:11:24.282 }' 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.282 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.850 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:24.850 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:24.850 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:11:24.850 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:24.850 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:24.850 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:24.850 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:24.850 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.850 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.850 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:24.850 [2024-11-27 09:48:25.710479] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.850 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.850 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:24.850 "name": "Existed_Raid", 00:11:24.850 "aliases": [ 00:11:24.850 "9647cb8b-f8dd-4b63-8c0d-3d4718958e22" 00:11:24.850 ], 00:11:24.850 "product_name": "Raid Volume", 00:11:24.850 "block_size": 512, 00:11:24.850 "num_blocks": 63488, 00:11:24.850 "uuid": "9647cb8b-f8dd-4b63-8c0d-3d4718958e22", 00:11:24.850 "assigned_rate_limits": { 00:11:24.850 "rw_ios_per_sec": 0, 00:11:24.850 "rw_mbytes_per_sec": 0, 00:11:24.850 "r_mbytes_per_sec": 0, 00:11:24.850 "w_mbytes_per_sec": 0 00:11:24.850 }, 00:11:24.850 "claimed": false, 00:11:24.850 "zoned": false, 00:11:24.850 "supported_io_types": { 00:11:24.850 "read": true, 00:11:24.850 "write": true, 00:11:24.850 "unmap": false, 00:11:24.850 "flush": false, 00:11:24.850 "reset": true, 00:11:24.850 "nvme_admin": false, 00:11:24.850 "nvme_io": false, 00:11:24.850 "nvme_io_md": false, 00:11:24.850 
"write_zeroes": true, 00:11:24.850 "zcopy": false, 00:11:24.850 "get_zone_info": false, 00:11:24.850 "zone_management": false, 00:11:24.850 "zone_append": false, 00:11:24.850 "compare": false, 00:11:24.850 "compare_and_write": false, 00:11:24.850 "abort": false, 00:11:24.850 "seek_hole": false, 00:11:24.850 "seek_data": false, 00:11:24.850 "copy": false, 00:11:24.850 "nvme_iov_md": false 00:11:24.850 }, 00:11:24.850 "memory_domains": [ 00:11:24.850 { 00:11:24.850 "dma_device_id": "system", 00:11:24.850 "dma_device_type": 1 00:11:24.850 }, 00:11:24.850 { 00:11:24.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.851 "dma_device_type": 2 00:11:24.851 }, 00:11:24.851 { 00:11:24.851 "dma_device_id": "system", 00:11:24.851 "dma_device_type": 1 00:11:24.851 }, 00:11:24.851 { 00:11:24.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.851 "dma_device_type": 2 00:11:24.851 }, 00:11:24.851 { 00:11:24.851 "dma_device_id": "system", 00:11:24.851 "dma_device_type": 1 00:11:24.851 }, 00:11:24.851 { 00:11:24.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.851 "dma_device_type": 2 00:11:24.851 } 00:11:24.851 ], 00:11:24.851 "driver_specific": { 00:11:24.851 "raid": { 00:11:24.851 "uuid": "9647cb8b-f8dd-4b63-8c0d-3d4718958e22", 00:11:24.851 "strip_size_kb": 0, 00:11:24.851 "state": "online", 00:11:24.851 "raid_level": "raid1", 00:11:24.851 "superblock": true, 00:11:24.851 "num_base_bdevs": 3, 00:11:24.851 "num_base_bdevs_discovered": 3, 00:11:24.851 "num_base_bdevs_operational": 3, 00:11:24.851 "base_bdevs_list": [ 00:11:24.851 { 00:11:24.851 "name": "BaseBdev1", 00:11:24.851 "uuid": "23f6e4b7-efc5-49b3-816f-f9d1a058aef9", 00:11:24.851 "is_configured": true, 00:11:24.851 "data_offset": 2048, 00:11:24.851 "data_size": 63488 00:11:24.851 }, 00:11:24.851 { 00:11:24.851 "name": "BaseBdev2", 00:11:24.851 "uuid": "e29ed908-81d6-4fd1-95d4-49d58f20cdd8", 00:11:24.851 "is_configured": true, 00:11:24.851 "data_offset": 2048, 00:11:24.851 "data_size": 63488 00:11:24.851 }, 
00:11:24.851 { 00:11:24.851 "name": "BaseBdev3", 00:11:24.851 "uuid": "9c3d9770-460e-423a-94ec-f5fa7f688e35", 00:11:24.851 "is_configured": true, 00:11:24.851 "data_offset": 2048, 00:11:24.851 "data_size": 63488 00:11:24.851 } 00:11:24.851 ] 00:11:24.851 } 00:11:24.851 } 00:11:24.851 }' 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:24.851 BaseBdev2 00:11:24.851 BaseBdev3' 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.851 
09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.851 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.112 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.112 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.112 09:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:25.112 09:48:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.113 09:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.113 [2024-11-27 09:48:25.989696] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.113 
09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.113 "name": "Existed_Raid", 00:11:25.113 "uuid": "9647cb8b-f8dd-4b63-8c0d-3d4718958e22", 00:11:25.113 "strip_size_kb": 0, 00:11:25.113 "state": "online", 00:11:25.113 "raid_level": "raid1", 00:11:25.113 "superblock": true, 00:11:25.113 "num_base_bdevs": 3, 00:11:25.113 "num_base_bdevs_discovered": 2, 00:11:25.113 "num_base_bdevs_operational": 2, 00:11:25.113 "base_bdevs_list": [ 00:11:25.113 { 00:11:25.113 "name": null, 00:11:25.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.113 "is_configured": false, 00:11:25.113 "data_offset": 0, 00:11:25.113 "data_size": 63488 00:11:25.113 }, 00:11:25.113 { 00:11:25.113 "name": "BaseBdev2", 00:11:25.113 "uuid": "e29ed908-81d6-4fd1-95d4-49d58f20cdd8", 00:11:25.113 "is_configured": true, 00:11:25.113 "data_offset": 2048, 00:11:25.113 "data_size": 63488 00:11:25.113 }, 00:11:25.113 { 00:11:25.113 "name": "BaseBdev3", 00:11:25.113 "uuid": "9c3d9770-460e-423a-94ec-f5fa7f688e35", 00:11:25.113 "is_configured": true, 00:11:25.113 "data_offset": 2048, 00:11:25.113 "data_size": 63488 00:11:25.113 } 00:11:25.113 ] 00:11:25.113 }' 00:11:25.113 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.113 
09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.720 [2024-11-27 09:48:26.611714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.720 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.720 [2024-11-27 09:48:26.775602] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:25.720 [2024-11-27 09:48:26.775796] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.981 [2024-11-27 09:48:26.880794] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.981 [2024-11-27 09:48:26.880860] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.981 [2024-11-27 09:48:26.880873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.981 BaseBdev2 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.981 09:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.981 [ 00:11:25.981 { 00:11:25.981 "name": "BaseBdev2", 00:11:25.981 "aliases": [ 00:11:25.981 "5bc6d094-ba6d-467e-9b97-f78b0503ac67" 00:11:25.981 ], 00:11:25.981 "product_name": "Malloc disk", 00:11:25.981 "block_size": 512, 00:11:25.981 "num_blocks": 65536, 00:11:25.981 "uuid": "5bc6d094-ba6d-467e-9b97-f78b0503ac67", 00:11:25.981 "assigned_rate_limits": { 00:11:25.981 "rw_ios_per_sec": 0, 00:11:25.981 "rw_mbytes_per_sec": 0, 00:11:25.981 "r_mbytes_per_sec": 0, 00:11:25.981 "w_mbytes_per_sec": 0 00:11:25.981 }, 00:11:25.981 "claimed": false, 00:11:25.981 "zoned": false, 00:11:25.981 "supported_io_types": { 00:11:25.981 "read": true, 00:11:25.981 "write": true, 00:11:25.981 "unmap": true, 00:11:25.981 "flush": true, 00:11:25.981 "reset": true, 00:11:25.981 "nvme_admin": false, 00:11:25.981 "nvme_io": false, 00:11:25.981 
"nvme_io_md": false, 00:11:25.981 "write_zeroes": true, 00:11:25.981 "zcopy": true, 00:11:25.981 "get_zone_info": false, 00:11:25.981 "zone_management": false, 00:11:25.981 "zone_append": false, 00:11:25.981 "compare": false, 00:11:25.981 "compare_and_write": false, 00:11:25.981 "abort": true, 00:11:25.981 "seek_hole": false, 00:11:25.981 "seek_data": false, 00:11:25.981 "copy": true, 00:11:25.981 "nvme_iov_md": false 00:11:25.981 }, 00:11:25.981 "memory_domains": [ 00:11:25.981 { 00:11:25.981 "dma_device_id": "system", 00:11:25.981 "dma_device_type": 1 00:11:25.981 }, 00:11:25.981 { 00:11:25.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.981 "dma_device_type": 2 00:11:25.981 } 00:11:25.981 ], 00:11:25.981 "driver_specific": {} 00:11:25.981 } 00:11:25.981 ] 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.981 BaseBdev3 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.981 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.982 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:25.982 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.982 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.982 [ 00:11:25.982 { 00:11:25.982 "name": "BaseBdev3", 00:11:25.982 "aliases": [ 00:11:25.982 "3f451d68-dfdf-49b5-a255-65a32b5514a2" 00:11:25.982 ], 00:11:25.982 "product_name": "Malloc disk", 00:11:25.982 "block_size": 512, 00:11:25.982 "num_blocks": 65536, 00:11:25.982 "uuid": "3f451d68-dfdf-49b5-a255-65a32b5514a2", 00:11:25.982 "assigned_rate_limits": { 00:11:25.982 "rw_ios_per_sec": 0, 00:11:25.982 "rw_mbytes_per_sec": 0, 00:11:25.982 "r_mbytes_per_sec": 0, 00:11:25.982 "w_mbytes_per_sec": 0 00:11:25.982 }, 00:11:25.982 "claimed": false, 00:11:25.982 "zoned": false, 00:11:25.982 "supported_io_types": { 00:11:25.982 "read": true, 00:11:25.982 "write": true, 00:11:25.982 "unmap": true, 00:11:25.982 "flush": true, 00:11:25.982 "reset": true, 00:11:25.982 "nvme_admin": false, 
00:11:25.982 "nvme_io": false, 00:11:25.982 "nvme_io_md": false, 00:11:25.982 "write_zeroes": true, 00:11:25.982 "zcopy": true, 00:11:25.982 "get_zone_info": false, 00:11:25.982 "zone_management": false, 00:11:25.982 "zone_append": false, 00:11:25.982 "compare": false, 00:11:25.982 "compare_and_write": false, 00:11:25.982 "abort": true, 00:11:25.982 "seek_hole": false, 00:11:25.982 "seek_data": false, 00:11:25.982 "copy": true, 00:11:25.982 "nvme_iov_md": false 00:11:25.982 }, 00:11:25.982 "memory_domains": [ 00:11:25.982 { 00:11:25.982 "dma_device_id": "system", 00:11:25.982 "dma_device_type": 1 00:11:25.982 }, 00:11:25.982 { 00:11:25.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.982 "dma_device_type": 2 00:11:25.982 } 00:11:25.982 ], 00:11:26.242 "driver_specific": {} 00:11:26.242 } 00:11:26.242 ] 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.242 [2024-11-27 09:48:27.118680] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.242 [2024-11-27 09:48:27.118735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.242 [2024-11-27 09:48:27.118756] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.242 [2024-11-27 09:48:27.120869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.242 
09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.242 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.242 "name": "Existed_Raid", 00:11:26.242 "uuid": "17700090-29ed-43ef-8d6e-789f8298169f", 00:11:26.242 "strip_size_kb": 0, 00:11:26.242 "state": "configuring", 00:11:26.242 "raid_level": "raid1", 00:11:26.242 "superblock": true, 00:11:26.242 "num_base_bdevs": 3, 00:11:26.242 "num_base_bdevs_discovered": 2, 00:11:26.242 "num_base_bdevs_operational": 3, 00:11:26.242 "base_bdevs_list": [ 00:11:26.242 { 00:11:26.242 "name": "BaseBdev1", 00:11:26.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.242 "is_configured": false, 00:11:26.242 "data_offset": 0, 00:11:26.242 "data_size": 0 00:11:26.242 }, 00:11:26.242 { 00:11:26.242 "name": "BaseBdev2", 00:11:26.242 "uuid": "5bc6d094-ba6d-467e-9b97-f78b0503ac67", 00:11:26.242 "is_configured": true, 00:11:26.242 "data_offset": 2048, 00:11:26.242 "data_size": 63488 00:11:26.242 }, 00:11:26.242 { 00:11:26.242 "name": "BaseBdev3", 00:11:26.242 "uuid": "3f451d68-dfdf-49b5-a255-65a32b5514a2", 00:11:26.242 "is_configured": true, 00:11:26.242 "data_offset": 2048, 00:11:26.242 "data_size": 63488 00:11:26.242 } 00:11:26.242 ] 00:11:26.242 }' 00:11:26.243 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.243 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.502 [2024-11-27 09:48:27.542029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:26.502 09:48:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.502 "name": 
"Existed_Raid", 00:11:26.502 "uuid": "17700090-29ed-43ef-8d6e-789f8298169f", 00:11:26.502 "strip_size_kb": 0, 00:11:26.502 "state": "configuring", 00:11:26.502 "raid_level": "raid1", 00:11:26.502 "superblock": true, 00:11:26.502 "num_base_bdevs": 3, 00:11:26.502 "num_base_bdevs_discovered": 1, 00:11:26.502 "num_base_bdevs_operational": 3, 00:11:26.502 "base_bdevs_list": [ 00:11:26.502 { 00:11:26.502 "name": "BaseBdev1", 00:11:26.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.502 "is_configured": false, 00:11:26.502 "data_offset": 0, 00:11:26.502 "data_size": 0 00:11:26.502 }, 00:11:26.502 { 00:11:26.502 "name": null, 00:11:26.502 "uuid": "5bc6d094-ba6d-467e-9b97-f78b0503ac67", 00:11:26.502 "is_configured": false, 00:11:26.502 "data_offset": 0, 00:11:26.502 "data_size": 63488 00:11:26.502 }, 00:11:26.502 { 00:11:26.502 "name": "BaseBdev3", 00:11:26.502 "uuid": "3f451d68-dfdf-49b5-a255-65a32b5514a2", 00:11:26.502 "is_configured": true, 00:11:26.502 "data_offset": 2048, 00:11:26.502 "data_size": 63488 00:11:26.502 } 00:11:26.502 ] 00:11:26.502 }' 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.502 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.073 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.073 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:27.073 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.073 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.073 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.073 09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:27.073 
09:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:27.073 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.073 09:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.073 [2024-11-27 09:48:28.032467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.073 BaseBdev1 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.073 [ 00:11:27.073 { 00:11:27.073 "name": "BaseBdev1", 00:11:27.073 "aliases": [ 00:11:27.073 "feed22be-c4df-4648-b3e3-066441bd96c5" 00:11:27.073 ], 00:11:27.073 "product_name": "Malloc disk", 00:11:27.073 "block_size": 512, 00:11:27.073 "num_blocks": 65536, 00:11:27.073 "uuid": "feed22be-c4df-4648-b3e3-066441bd96c5", 00:11:27.073 "assigned_rate_limits": { 00:11:27.073 "rw_ios_per_sec": 0, 00:11:27.073 "rw_mbytes_per_sec": 0, 00:11:27.073 "r_mbytes_per_sec": 0, 00:11:27.073 "w_mbytes_per_sec": 0 00:11:27.073 }, 00:11:27.073 "claimed": true, 00:11:27.073 "claim_type": "exclusive_write", 00:11:27.073 "zoned": false, 00:11:27.073 "supported_io_types": { 00:11:27.073 "read": true, 00:11:27.073 "write": true, 00:11:27.073 "unmap": true, 00:11:27.073 "flush": true, 00:11:27.073 "reset": true, 00:11:27.073 "nvme_admin": false, 00:11:27.073 "nvme_io": false, 00:11:27.073 "nvme_io_md": false, 00:11:27.073 "write_zeroes": true, 00:11:27.073 "zcopy": true, 00:11:27.073 "get_zone_info": false, 00:11:27.073 "zone_management": false, 00:11:27.073 "zone_append": false, 00:11:27.073 "compare": false, 00:11:27.073 "compare_and_write": false, 00:11:27.073 "abort": true, 00:11:27.073 "seek_hole": false, 00:11:27.073 "seek_data": false, 00:11:27.073 "copy": true, 00:11:27.073 "nvme_iov_md": false 00:11:27.073 }, 00:11:27.073 "memory_domains": [ 00:11:27.073 { 00:11:27.073 "dma_device_id": "system", 00:11:27.073 "dma_device_type": 1 00:11:27.073 }, 00:11:27.073 { 00:11:27.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.073 "dma_device_type": 2 00:11:27.073 } 00:11:27.073 ], 00:11:27.073 "driver_specific": {} 00:11:27.073 } 00:11:27.073 ] 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:27.073 
09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.073 "name": "Existed_Raid", 00:11:27.073 "uuid": "17700090-29ed-43ef-8d6e-789f8298169f", 00:11:27.073 "strip_size_kb": 0, 
00:11:27.073 "state": "configuring", 00:11:27.073 "raid_level": "raid1", 00:11:27.073 "superblock": true, 00:11:27.073 "num_base_bdevs": 3, 00:11:27.073 "num_base_bdevs_discovered": 2, 00:11:27.073 "num_base_bdevs_operational": 3, 00:11:27.073 "base_bdevs_list": [ 00:11:27.073 { 00:11:27.073 "name": "BaseBdev1", 00:11:27.073 "uuid": "feed22be-c4df-4648-b3e3-066441bd96c5", 00:11:27.073 "is_configured": true, 00:11:27.073 "data_offset": 2048, 00:11:27.073 "data_size": 63488 00:11:27.073 }, 00:11:27.073 { 00:11:27.073 "name": null, 00:11:27.073 "uuid": "5bc6d094-ba6d-467e-9b97-f78b0503ac67", 00:11:27.073 "is_configured": false, 00:11:27.073 "data_offset": 0, 00:11:27.073 "data_size": 63488 00:11:27.073 }, 00:11:27.073 { 00:11:27.073 "name": "BaseBdev3", 00:11:27.073 "uuid": "3f451d68-dfdf-49b5-a255-65a32b5514a2", 00:11:27.073 "is_configured": true, 00:11:27.073 "data_offset": 2048, 00:11:27.073 "data_size": 63488 00:11:27.073 } 00:11:27.073 ] 00:11:27.073 }' 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.073 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.334 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.334 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.334 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.334 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:27.334 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.594 [2024-11-27 09:48:28.483764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.594 "name": "Existed_Raid", 00:11:27.594 "uuid": "17700090-29ed-43ef-8d6e-789f8298169f", 00:11:27.594 "strip_size_kb": 0, 00:11:27.594 "state": "configuring", 00:11:27.594 "raid_level": "raid1", 00:11:27.594 "superblock": true, 00:11:27.594 "num_base_bdevs": 3, 00:11:27.594 "num_base_bdevs_discovered": 1, 00:11:27.594 "num_base_bdevs_operational": 3, 00:11:27.594 "base_bdevs_list": [ 00:11:27.594 { 00:11:27.594 "name": "BaseBdev1", 00:11:27.594 "uuid": "feed22be-c4df-4648-b3e3-066441bd96c5", 00:11:27.594 "is_configured": true, 00:11:27.594 "data_offset": 2048, 00:11:27.594 "data_size": 63488 00:11:27.594 }, 00:11:27.594 { 00:11:27.594 "name": null, 00:11:27.594 "uuid": "5bc6d094-ba6d-467e-9b97-f78b0503ac67", 00:11:27.594 "is_configured": false, 00:11:27.594 "data_offset": 0, 00:11:27.594 "data_size": 63488 00:11:27.594 }, 00:11:27.594 { 00:11:27.594 "name": null, 00:11:27.594 "uuid": "3f451d68-dfdf-49b5-a255-65a32b5514a2", 00:11:27.594 "is_configured": false, 00:11:27.594 "data_offset": 0, 00:11:27.594 "data_size": 63488 00:11:27.594 } 00:11:27.594 ] 00:11:27.594 }' 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.594 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.854 [2024-11-27 09:48:28.943055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.854 09:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.114 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.114 "name": "Existed_Raid", 00:11:28.114 "uuid": "17700090-29ed-43ef-8d6e-789f8298169f", 00:11:28.114 "strip_size_kb": 0, 00:11:28.114 "state": "configuring", 00:11:28.114 "raid_level": "raid1", 00:11:28.114 "superblock": true, 00:11:28.114 "num_base_bdevs": 3, 00:11:28.114 "num_base_bdevs_discovered": 2, 00:11:28.114 "num_base_bdevs_operational": 3, 00:11:28.114 "base_bdevs_list": [ 00:11:28.114 { 00:11:28.114 "name": "BaseBdev1", 00:11:28.114 "uuid": "feed22be-c4df-4648-b3e3-066441bd96c5", 00:11:28.114 "is_configured": true, 00:11:28.114 "data_offset": 2048, 00:11:28.114 "data_size": 63488 00:11:28.114 }, 00:11:28.114 { 00:11:28.114 "name": null, 00:11:28.114 "uuid": "5bc6d094-ba6d-467e-9b97-f78b0503ac67", 00:11:28.114 "is_configured": false, 00:11:28.114 "data_offset": 0, 00:11:28.114 "data_size": 63488 00:11:28.114 }, 00:11:28.114 { 00:11:28.114 "name": "BaseBdev3", 00:11:28.114 "uuid": "3f451d68-dfdf-49b5-a255-65a32b5514a2", 00:11:28.114 "is_configured": true, 00:11:28.114 "data_offset": 2048, 00:11:28.114 "data_size": 63488 00:11:28.114 } 00:11:28.114 ] 00:11:28.114 }' 00:11:28.114 09:48:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.114 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.374 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.374 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.374 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.374 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:28.374 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.374 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:28.374 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:28.374 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.374 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.374 [2024-11-27 09:48:29.402245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:28.634 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.634 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:28.634 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.634 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.634 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.634 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:11:28.634 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.634 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.634 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.634 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.634 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.634 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.634 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.634 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.634 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.634 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.634 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.634 "name": "Existed_Raid", 00:11:28.634 "uuid": "17700090-29ed-43ef-8d6e-789f8298169f", 00:11:28.634 "strip_size_kb": 0, 00:11:28.634 "state": "configuring", 00:11:28.634 "raid_level": "raid1", 00:11:28.634 "superblock": true, 00:11:28.634 "num_base_bdevs": 3, 00:11:28.634 "num_base_bdevs_discovered": 1, 00:11:28.634 "num_base_bdevs_operational": 3, 00:11:28.634 "base_bdevs_list": [ 00:11:28.634 { 00:11:28.634 "name": null, 00:11:28.634 "uuid": "feed22be-c4df-4648-b3e3-066441bd96c5", 00:11:28.634 "is_configured": false, 00:11:28.634 "data_offset": 0, 00:11:28.634 "data_size": 63488 00:11:28.634 }, 00:11:28.634 { 00:11:28.634 "name": null, 00:11:28.634 "uuid": 
"5bc6d094-ba6d-467e-9b97-f78b0503ac67", 00:11:28.634 "is_configured": false, 00:11:28.634 "data_offset": 0, 00:11:28.634 "data_size": 63488 00:11:28.634 }, 00:11:28.634 { 00:11:28.634 "name": "BaseBdev3", 00:11:28.634 "uuid": "3f451d68-dfdf-49b5-a255-65a32b5514a2", 00:11:28.634 "is_configured": true, 00:11:28.634 "data_offset": 2048, 00:11:28.634 "data_size": 63488 00:11:28.634 } 00:11:28.634 ] 00:11:28.634 }' 00:11:28.634 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.634 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.895 [2024-11-27 09:48:29.980937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.895 09:48:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.895 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.155 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.155 "name": "Existed_Raid", 00:11:29.155 "uuid": "17700090-29ed-43ef-8d6e-789f8298169f", 00:11:29.155 "strip_size_kb": 0, 00:11:29.155 "state": "configuring", 00:11:29.155 
"raid_level": "raid1", 00:11:29.155 "superblock": true, 00:11:29.155 "num_base_bdevs": 3, 00:11:29.155 "num_base_bdevs_discovered": 2, 00:11:29.155 "num_base_bdevs_operational": 3, 00:11:29.155 "base_bdevs_list": [ 00:11:29.155 { 00:11:29.155 "name": null, 00:11:29.155 "uuid": "feed22be-c4df-4648-b3e3-066441bd96c5", 00:11:29.155 "is_configured": false, 00:11:29.155 "data_offset": 0, 00:11:29.155 "data_size": 63488 00:11:29.155 }, 00:11:29.155 { 00:11:29.155 "name": "BaseBdev2", 00:11:29.155 "uuid": "5bc6d094-ba6d-467e-9b97-f78b0503ac67", 00:11:29.155 "is_configured": true, 00:11:29.155 "data_offset": 2048, 00:11:29.155 "data_size": 63488 00:11:29.155 }, 00:11:29.155 { 00:11:29.155 "name": "BaseBdev3", 00:11:29.155 "uuid": "3f451d68-dfdf-49b5-a255-65a32b5514a2", 00:11:29.155 "is_configured": true, 00:11:29.155 "data_offset": 2048, 00:11:29.155 "data_size": 63488 00:11:29.155 } 00:11:29.155 ] 00:11:29.155 }' 00:11:29.155 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.155 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.415 09:48:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u feed22be-c4df-4648-b3e3-066441bd96c5 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.415 [2024-11-27 09:48:30.459261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:29.415 [2024-11-27 09:48:30.459535] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:29.415 [2024-11-27 09:48:30.459561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:29.415 [2024-11-27 09:48:30.459863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:29.415 [2024-11-27 09:48:30.460049] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:29.415 [2024-11-27 09:48:30.460066] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:29.415 [2024-11-27 09:48:30.460232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.415 NewBaseBdev 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:29.415 
09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.415 [ 00:11:29.415 { 00:11:29.415 "name": "NewBaseBdev", 00:11:29.415 "aliases": [ 00:11:29.415 "feed22be-c4df-4648-b3e3-066441bd96c5" 00:11:29.415 ], 00:11:29.415 "product_name": "Malloc disk", 00:11:29.415 "block_size": 512, 00:11:29.415 "num_blocks": 65536, 00:11:29.415 "uuid": "feed22be-c4df-4648-b3e3-066441bd96c5", 00:11:29.415 "assigned_rate_limits": { 00:11:29.415 "rw_ios_per_sec": 0, 00:11:29.415 "rw_mbytes_per_sec": 0, 00:11:29.415 "r_mbytes_per_sec": 0, 00:11:29.415 "w_mbytes_per_sec": 0 00:11:29.415 }, 00:11:29.415 "claimed": true, 00:11:29.415 "claim_type": "exclusive_write", 00:11:29.415 
"zoned": false, 00:11:29.415 "supported_io_types": { 00:11:29.415 "read": true, 00:11:29.415 "write": true, 00:11:29.415 "unmap": true, 00:11:29.415 "flush": true, 00:11:29.415 "reset": true, 00:11:29.415 "nvme_admin": false, 00:11:29.415 "nvme_io": false, 00:11:29.415 "nvme_io_md": false, 00:11:29.415 "write_zeroes": true, 00:11:29.415 "zcopy": true, 00:11:29.415 "get_zone_info": false, 00:11:29.415 "zone_management": false, 00:11:29.415 "zone_append": false, 00:11:29.415 "compare": false, 00:11:29.415 "compare_and_write": false, 00:11:29.415 "abort": true, 00:11:29.415 "seek_hole": false, 00:11:29.415 "seek_data": false, 00:11:29.415 "copy": true, 00:11:29.415 "nvme_iov_md": false 00:11:29.415 }, 00:11:29.415 "memory_domains": [ 00:11:29.415 { 00:11:29.415 "dma_device_id": "system", 00:11:29.415 "dma_device_type": 1 00:11:29.415 }, 00:11:29.415 { 00:11:29.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.415 "dma_device_type": 2 00:11:29.415 } 00:11:29.415 ], 00:11:29.415 "driver_specific": {} 00:11:29.415 } 00:11:29.415 ] 00:11:29.415 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.416 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:29.416 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:29.416 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.416 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.416 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.416 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.416 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:11:29.416 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.416 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.416 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.416 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.416 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.416 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.416 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.416 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.416 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.675 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.675 "name": "Existed_Raid", 00:11:29.675 "uuid": "17700090-29ed-43ef-8d6e-789f8298169f", 00:11:29.675 "strip_size_kb": 0, 00:11:29.675 "state": "online", 00:11:29.675 "raid_level": "raid1", 00:11:29.675 "superblock": true, 00:11:29.675 "num_base_bdevs": 3, 00:11:29.675 "num_base_bdevs_discovered": 3, 00:11:29.675 "num_base_bdevs_operational": 3, 00:11:29.675 "base_bdevs_list": [ 00:11:29.675 { 00:11:29.675 "name": "NewBaseBdev", 00:11:29.675 "uuid": "feed22be-c4df-4648-b3e3-066441bd96c5", 00:11:29.675 "is_configured": true, 00:11:29.675 "data_offset": 2048, 00:11:29.675 "data_size": 63488 00:11:29.675 }, 00:11:29.675 { 00:11:29.675 "name": "BaseBdev2", 00:11:29.675 "uuid": "5bc6d094-ba6d-467e-9b97-f78b0503ac67", 00:11:29.675 "is_configured": true, 00:11:29.675 "data_offset": 2048, 00:11:29.675 "data_size": 63488 00:11:29.675 }, 00:11:29.675 
{ 00:11:29.675 "name": "BaseBdev3", 00:11:29.675 "uuid": "3f451d68-dfdf-49b5-a255-65a32b5514a2", 00:11:29.675 "is_configured": true, 00:11:29.675 "data_offset": 2048, 00:11:29.675 "data_size": 63488 00:11:29.675 } 00:11:29.675 ] 00:11:29.675 }' 00:11:29.675 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.675 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.935 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:29.935 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:29.935 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:29.935 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:29.935 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:29.935 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:29.935 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:29.935 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.935 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.935 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:29.935 [2024-11-27 09:48:30.894837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.935 09:48:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.935 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:29.935 "name": "Existed_Raid", 00:11:29.935 
"aliases": [ 00:11:29.935 "17700090-29ed-43ef-8d6e-789f8298169f" 00:11:29.935 ], 00:11:29.935 "product_name": "Raid Volume", 00:11:29.935 "block_size": 512, 00:11:29.935 "num_blocks": 63488, 00:11:29.935 "uuid": "17700090-29ed-43ef-8d6e-789f8298169f", 00:11:29.935 "assigned_rate_limits": { 00:11:29.935 "rw_ios_per_sec": 0, 00:11:29.935 "rw_mbytes_per_sec": 0, 00:11:29.935 "r_mbytes_per_sec": 0, 00:11:29.935 "w_mbytes_per_sec": 0 00:11:29.935 }, 00:11:29.935 "claimed": false, 00:11:29.935 "zoned": false, 00:11:29.935 "supported_io_types": { 00:11:29.935 "read": true, 00:11:29.935 "write": true, 00:11:29.935 "unmap": false, 00:11:29.935 "flush": false, 00:11:29.935 "reset": true, 00:11:29.935 "nvme_admin": false, 00:11:29.935 "nvme_io": false, 00:11:29.935 "nvme_io_md": false, 00:11:29.935 "write_zeroes": true, 00:11:29.935 "zcopy": false, 00:11:29.935 "get_zone_info": false, 00:11:29.935 "zone_management": false, 00:11:29.935 "zone_append": false, 00:11:29.935 "compare": false, 00:11:29.935 "compare_and_write": false, 00:11:29.935 "abort": false, 00:11:29.935 "seek_hole": false, 00:11:29.935 "seek_data": false, 00:11:29.935 "copy": false, 00:11:29.935 "nvme_iov_md": false 00:11:29.935 }, 00:11:29.935 "memory_domains": [ 00:11:29.935 { 00:11:29.935 "dma_device_id": "system", 00:11:29.935 "dma_device_type": 1 00:11:29.935 }, 00:11:29.935 { 00:11:29.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.935 "dma_device_type": 2 00:11:29.935 }, 00:11:29.935 { 00:11:29.935 "dma_device_id": "system", 00:11:29.935 "dma_device_type": 1 00:11:29.935 }, 00:11:29.935 { 00:11:29.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.935 "dma_device_type": 2 00:11:29.935 }, 00:11:29.935 { 00:11:29.935 "dma_device_id": "system", 00:11:29.935 "dma_device_type": 1 00:11:29.935 }, 00:11:29.935 { 00:11:29.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.935 "dma_device_type": 2 00:11:29.935 } 00:11:29.935 ], 00:11:29.935 "driver_specific": { 00:11:29.935 "raid": { 00:11:29.935 
"uuid": "17700090-29ed-43ef-8d6e-789f8298169f", 00:11:29.935 "strip_size_kb": 0, 00:11:29.935 "state": "online", 00:11:29.935 "raid_level": "raid1", 00:11:29.935 "superblock": true, 00:11:29.935 "num_base_bdevs": 3, 00:11:29.935 "num_base_bdevs_discovered": 3, 00:11:29.935 "num_base_bdevs_operational": 3, 00:11:29.935 "base_bdevs_list": [ 00:11:29.935 { 00:11:29.935 "name": "NewBaseBdev", 00:11:29.935 "uuid": "feed22be-c4df-4648-b3e3-066441bd96c5", 00:11:29.935 "is_configured": true, 00:11:29.935 "data_offset": 2048, 00:11:29.935 "data_size": 63488 00:11:29.935 }, 00:11:29.935 { 00:11:29.935 "name": "BaseBdev2", 00:11:29.935 "uuid": "5bc6d094-ba6d-467e-9b97-f78b0503ac67", 00:11:29.935 "is_configured": true, 00:11:29.935 "data_offset": 2048, 00:11:29.935 "data_size": 63488 00:11:29.935 }, 00:11:29.935 { 00:11:29.935 "name": "BaseBdev3", 00:11:29.935 "uuid": "3f451d68-dfdf-49b5-a255-65a32b5514a2", 00:11:29.935 "is_configured": true, 00:11:29.935 "data_offset": 2048, 00:11:29.935 "data_size": 63488 00:11:29.935 } 00:11:29.935 ] 00:11:29.935 } 00:11:29.935 } 00:11:29.935 }' 00:11:29.935 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:29.935 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:29.935 BaseBdev2 00:11:29.935 BaseBdev3' 00:11:29.935 09:48:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.935 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:29.936 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.936 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:29.936 09:48:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.936 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.936 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.936 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.936 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.936 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.936 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.936 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:29.936 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.936 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.936 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.936 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.195 [2024-11-27 09:48:31.142114] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:30.195 [2024-11-27 09:48:31.142154] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.195 [2024-11-27 09:48:31.142235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.195 [2024-11-27 09:48:31.142573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:30.195 [2024-11-27 09:48:31.142593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68285 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68285 ']' 
00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68285 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68285 00:11:30.195 killing process with pid 68285 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:30.195 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68285' 00:11:30.196 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68285 00:11:30.196 [2024-11-27 09:48:31.187584] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:30.196 09:48:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68285 00:11:30.455 [2024-11-27 09:48:31.516146] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:31.835 09:48:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:31.836 00:11:31.836 real 0m10.334s 00:11:31.836 user 0m16.103s 00:11:31.836 sys 0m1.878s 00:11:31.836 09:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.836 09:48:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.836 ************************************ 00:11:31.836 END TEST raid_state_function_test_sb 00:11:31.836 ************************************ 00:11:31.836 09:48:32 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
00:11:31.836 09:48:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:31.836 09:48:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.836 09:48:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:31.836 ************************************ 00:11:31.836 START TEST raid_superblock_test 00:11:31.836 ************************************ 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 
00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68911 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68911 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68911 ']' 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:31.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:31.836 09:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.836 [2024-11-27 09:48:32.885913] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:11:31.836 [2024-11-27 09:48:32.886077] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68911 ] 00:11:32.095 [2024-11-27 09:48:33.063263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.095 [2024-11-27 09:48:33.201587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.353 [2024-11-27 09:48:33.429300] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.353 [2024-11-27 09:48:33.429354] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.612 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:32.612 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:32.612 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:32.612 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:32.612 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:32.612 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:32.612 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:32.612 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:32.612 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:32.612 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:32.612 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:32.612 
09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.612 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.872 malloc1 00:11:32.872 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.872 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:32.872 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.872 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.872 [2024-11-27 09:48:33.779342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:32.872 [2024-11-27 09:48:33.779406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.872 [2024-11-27 09:48:33.779430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:32.872 [2024-11-27 09:48:33.779440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.872 [2024-11-27 09:48:33.781935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.872 [2024-11-27 09:48:33.781969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:32.872 pt1 00:11:32.872 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.872 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:32.872 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:32.872 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:32.872 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:32.872 09:48:33 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:32.872 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:32.872 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:32.872 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:32.872 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:32.872 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.872 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.873 malloc2 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.873 [2024-11-27 09:48:33.839660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:32.873 [2024-11-27 09:48:33.839717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.873 [2024-11-27 09:48:33.839746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:32.873 [2024-11-27 09:48:33.839755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.873 [2024-11-27 09:48:33.842249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.873 [2024-11-27 09:48:33.842280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:32.873 
pt2 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.873 malloc3 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.873 [2024-11-27 09:48:33.912926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:32.873 [2024-11-27 09:48:33.912987] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.873 [2024-11-27 09:48:33.913023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:32.873 [2024-11-27 09:48:33.913033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.873 [2024-11-27 09:48:33.915477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.873 [2024-11-27 09:48:33.915515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:32.873 pt3 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.873 [2024-11-27 09:48:33.924951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:32.873 [2024-11-27 09:48:33.927076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:32.873 [2024-11-27 09:48:33.927152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:32.873 [2024-11-27 09:48:33.927326] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:32.873 [2024-11-27 09:48:33.927367] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:32.873 [2024-11-27 09:48:33.927635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:32.873 
[2024-11-27 09:48:33.927838] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:32.873 [2024-11-27 09:48:33.927860] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:32.873 [2024-11-27 09:48:33.928056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.873 "name": "raid_bdev1", 00:11:32.873 "uuid": "8afa455d-5a5d-4bbd-9610-42a839683b04", 00:11:32.873 "strip_size_kb": 0, 00:11:32.873 "state": "online", 00:11:32.873 "raid_level": "raid1", 00:11:32.873 "superblock": true, 00:11:32.873 "num_base_bdevs": 3, 00:11:32.873 "num_base_bdevs_discovered": 3, 00:11:32.873 "num_base_bdevs_operational": 3, 00:11:32.873 "base_bdevs_list": [ 00:11:32.873 { 00:11:32.873 "name": "pt1", 00:11:32.873 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:32.873 "is_configured": true, 00:11:32.873 "data_offset": 2048, 00:11:32.873 "data_size": 63488 00:11:32.873 }, 00:11:32.873 { 00:11:32.873 "name": "pt2", 00:11:32.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:32.873 "is_configured": true, 00:11:32.873 "data_offset": 2048, 00:11:32.873 "data_size": 63488 00:11:32.873 }, 00:11:32.873 { 00:11:32.873 "name": "pt3", 00:11:32.873 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:32.873 "is_configured": true, 00:11:32.873 "data_offset": 2048, 00:11:32.873 "data_size": 63488 00:11:32.873 } 00:11:32.873 ] 00:11:32.873 }' 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.873 09:48:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.443 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:33.443 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:33.443 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:33.443 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:33.443 09:48:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:33.443 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:33.443 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:33.443 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.443 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.443 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:33.443 [2024-11-27 09:48:34.348602] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:33.444 "name": "raid_bdev1", 00:11:33.444 "aliases": [ 00:11:33.444 "8afa455d-5a5d-4bbd-9610-42a839683b04" 00:11:33.444 ], 00:11:33.444 "product_name": "Raid Volume", 00:11:33.444 "block_size": 512, 00:11:33.444 "num_blocks": 63488, 00:11:33.444 "uuid": "8afa455d-5a5d-4bbd-9610-42a839683b04", 00:11:33.444 "assigned_rate_limits": { 00:11:33.444 "rw_ios_per_sec": 0, 00:11:33.444 "rw_mbytes_per_sec": 0, 00:11:33.444 "r_mbytes_per_sec": 0, 00:11:33.444 "w_mbytes_per_sec": 0 00:11:33.444 }, 00:11:33.444 "claimed": false, 00:11:33.444 "zoned": false, 00:11:33.444 "supported_io_types": { 00:11:33.444 "read": true, 00:11:33.444 "write": true, 00:11:33.444 "unmap": false, 00:11:33.444 "flush": false, 00:11:33.444 "reset": true, 00:11:33.444 "nvme_admin": false, 00:11:33.444 "nvme_io": false, 00:11:33.444 "nvme_io_md": false, 00:11:33.444 "write_zeroes": true, 00:11:33.444 "zcopy": false, 00:11:33.444 "get_zone_info": false, 00:11:33.444 "zone_management": false, 00:11:33.444 "zone_append": false, 00:11:33.444 "compare": false, 00:11:33.444 
"compare_and_write": false, 00:11:33.444 "abort": false, 00:11:33.444 "seek_hole": false, 00:11:33.444 "seek_data": false, 00:11:33.444 "copy": false, 00:11:33.444 "nvme_iov_md": false 00:11:33.444 }, 00:11:33.444 "memory_domains": [ 00:11:33.444 { 00:11:33.444 "dma_device_id": "system", 00:11:33.444 "dma_device_type": 1 00:11:33.444 }, 00:11:33.444 { 00:11:33.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.444 "dma_device_type": 2 00:11:33.444 }, 00:11:33.444 { 00:11:33.444 "dma_device_id": "system", 00:11:33.444 "dma_device_type": 1 00:11:33.444 }, 00:11:33.444 { 00:11:33.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.444 "dma_device_type": 2 00:11:33.444 }, 00:11:33.444 { 00:11:33.444 "dma_device_id": "system", 00:11:33.444 "dma_device_type": 1 00:11:33.444 }, 00:11:33.444 { 00:11:33.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.444 "dma_device_type": 2 00:11:33.444 } 00:11:33.444 ], 00:11:33.444 "driver_specific": { 00:11:33.444 "raid": { 00:11:33.444 "uuid": "8afa455d-5a5d-4bbd-9610-42a839683b04", 00:11:33.444 "strip_size_kb": 0, 00:11:33.444 "state": "online", 00:11:33.444 "raid_level": "raid1", 00:11:33.444 "superblock": true, 00:11:33.444 "num_base_bdevs": 3, 00:11:33.444 "num_base_bdevs_discovered": 3, 00:11:33.444 "num_base_bdevs_operational": 3, 00:11:33.444 "base_bdevs_list": [ 00:11:33.444 { 00:11:33.444 "name": "pt1", 00:11:33.444 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.444 "is_configured": true, 00:11:33.444 "data_offset": 2048, 00:11:33.444 "data_size": 63488 00:11:33.444 }, 00:11:33.444 { 00:11:33.444 "name": "pt2", 00:11:33.444 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.444 "is_configured": true, 00:11:33.444 "data_offset": 2048, 00:11:33.444 "data_size": 63488 00:11:33.444 }, 00:11:33.444 { 00:11:33.444 "name": "pt3", 00:11:33.444 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.444 "is_configured": true, 00:11:33.444 "data_offset": 2048, 00:11:33.444 "data_size": 63488 00:11:33.444 } 
00:11:33.444 ] 00:11:33.444 } 00:11:33.444 } 00:11:33.444 }' 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:33.444 pt2 00:11:33.444 pt3' 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.444 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.705 [2024-11-27 09:48:34.576051] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8afa455d-5a5d-4bbd-9610-42a839683b04 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8afa455d-5a5d-4bbd-9610-42a839683b04 ']' 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.705 [2024-11-27 09:48:34.619707] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:33.705 [2024-11-27 09:48:34.619746] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.705 [2024-11-27 09:48:34.619839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.705 [2024-11-27 09:48:34.619926] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.705 [2024-11-27 09:48:34.619936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:33.705 09:48:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.705 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.705 [2024-11-27 09:48:34.755519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:33.705 [2024-11-27 09:48:34.757864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:33.705 [2024-11-27 09:48:34.757952] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:33.706 [2024-11-27 09:48:34.758030] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:33.706 [2024-11-27 09:48:34.758083] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:33.706 [2024-11-27 09:48:34.758106] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:33.706 [2024-11-27 09:48:34.758126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:33.706 [2024-11-27 09:48:34.758137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:33.706 request: 00:11:33.706 { 00:11:33.706 "name": "raid_bdev1", 00:11:33.706 "raid_level": "raid1", 00:11:33.706 "base_bdevs": [ 00:11:33.706 "malloc1", 00:11:33.706 "malloc2", 00:11:33.706 "malloc3" 00:11:33.706 ], 00:11:33.706 "superblock": false, 00:11:33.706 "method": "bdev_raid_create", 00:11:33.706 "req_id": 1 00:11:33.706 } 00:11:33.706 Got JSON-RPC error response 00:11:33.706 response: 00:11:33.706 { 00:11:33.706 "code": -17, 00:11:33.706 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:33.706 } 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.706 [2024-11-27 09:48:34.827342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:33.706 [2024-11-27 09:48:34.827393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.706 [2024-11-27 09:48:34.827415] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:33.706 [2024-11-27 09:48:34.827425] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.706 [2024-11-27 09:48:34.830097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.706 [2024-11-27 09:48:34.830129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:33.706 [2024-11-27 09:48:34.830221] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:33.706 [2024-11-27 09:48:34.830276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:33.706 pt1 00:11:33.706 
09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.706 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.966 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.966 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.966 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.966 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.966 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.966 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.966 "name": "raid_bdev1", 00:11:33.966 "uuid": "8afa455d-5a5d-4bbd-9610-42a839683b04", 00:11:33.966 "strip_size_kb": 0, 00:11:33.966 
"state": "configuring", 00:11:33.966 "raid_level": "raid1", 00:11:33.966 "superblock": true, 00:11:33.966 "num_base_bdevs": 3, 00:11:33.966 "num_base_bdevs_discovered": 1, 00:11:33.966 "num_base_bdevs_operational": 3, 00:11:33.966 "base_bdevs_list": [ 00:11:33.966 { 00:11:33.966 "name": "pt1", 00:11:33.966 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.966 "is_configured": true, 00:11:33.966 "data_offset": 2048, 00:11:33.966 "data_size": 63488 00:11:33.966 }, 00:11:33.966 { 00:11:33.966 "name": null, 00:11:33.966 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.966 "is_configured": false, 00:11:33.966 "data_offset": 2048, 00:11:33.966 "data_size": 63488 00:11:33.966 }, 00:11:33.966 { 00:11:33.966 "name": null, 00:11:33.966 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.966 "is_configured": false, 00:11:33.966 "data_offset": 2048, 00:11:33.966 "data_size": 63488 00:11:33.966 } 00:11:33.966 ] 00:11:33.966 }' 00:11:33.966 09:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.966 09:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.225 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:34.225 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:34.225 09:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.225 09:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.225 [2024-11-27 09:48:35.162821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:34.225 [2024-11-27 09:48:35.162893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.225 [2024-11-27 09:48:35.162920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:34.225 
[2024-11-27 09:48:35.162931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.225 [2024-11-27 09:48:35.163457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.225 [2024-11-27 09:48:35.163481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:34.225 [2024-11-27 09:48:35.163602] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:34.225 [2024-11-27 09:48:35.163636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:34.225 pt2 00:11:34.225 09:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.225 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:34.225 09:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.226 09:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.226 [2024-11-27 09:48:35.174799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:34.226 09:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.226 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:34.226 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.226 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.226 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.226 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.226 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.226 09:48:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.226 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.226 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.226 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.226 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.226 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.226 09:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.226 09:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.226 09:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.226 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.226 "name": "raid_bdev1", 00:11:34.226 "uuid": "8afa455d-5a5d-4bbd-9610-42a839683b04", 00:11:34.226 "strip_size_kb": 0, 00:11:34.226 "state": "configuring", 00:11:34.226 "raid_level": "raid1", 00:11:34.226 "superblock": true, 00:11:34.226 "num_base_bdevs": 3, 00:11:34.226 "num_base_bdevs_discovered": 1, 00:11:34.226 "num_base_bdevs_operational": 3, 00:11:34.226 "base_bdevs_list": [ 00:11:34.226 { 00:11:34.226 "name": "pt1", 00:11:34.226 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:34.226 "is_configured": true, 00:11:34.226 "data_offset": 2048, 00:11:34.226 "data_size": 63488 00:11:34.226 }, 00:11:34.226 { 00:11:34.226 "name": null, 00:11:34.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.226 "is_configured": false, 00:11:34.226 "data_offset": 0, 00:11:34.226 "data_size": 63488 00:11:34.226 }, 00:11:34.226 { 00:11:34.226 "name": null, 00:11:34.226 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:34.226 "is_configured": false, 00:11:34.226 
"data_offset": 2048, 00:11:34.226 "data_size": 63488 00:11:34.226 } 00:11:34.226 ] 00:11:34.226 }' 00:11:34.226 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.226 09:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.485 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:34.485 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:34.485 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:34.485 09:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.485 09:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.744 [2024-11-27 09:48:35.622078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:34.744 [2024-11-27 09:48:35.622168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.744 [2024-11-27 09:48:35.622192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:34.744 [2024-11-27 09:48:35.622205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.744 [2024-11-27 09:48:35.622764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.744 [2024-11-27 09:48:35.622786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:34.744 [2024-11-27 09:48:35.622888] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:34.744 [2024-11-27 09:48:35.622926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:34.744 pt2 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.744 09:48:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.744 [2024-11-27 09:48:35.630020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:34.744 [2024-11-27 09:48:35.630069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.744 [2024-11-27 09:48:35.630085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:34.744 [2024-11-27 09:48:35.630096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.744 [2024-11-27 09:48:35.630534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.744 [2024-11-27 09:48:35.630563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:34.744 [2024-11-27 09:48:35.630631] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:34.744 [2024-11-27 09:48:35.630657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:34.744 [2024-11-27 09:48:35.630779] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:34.744 [2024-11-27 09:48:35.630798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:34.744 [2024-11-27 09:48:35.631074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:34.744 [2024-11-27 09:48:35.631243] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:11:34.744 [2024-11-27 09:48:35.631251] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:34.744 [2024-11-27 09:48:35.631405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.744 pt3 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.744 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.744 "name": "raid_bdev1", 00:11:34.744 "uuid": "8afa455d-5a5d-4bbd-9610-42a839683b04", 00:11:34.744 "strip_size_kb": 0, 00:11:34.744 "state": "online", 00:11:34.744 "raid_level": "raid1", 00:11:34.744 "superblock": true, 00:11:34.744 "num_base_bdevs": 3, 00:11:34.744 "num_base_bdevs_discovered": 3, 00:11:34.744 "num_base_bdevs_operational": 3, 00:11:34.745 "base_bdevs_list": [ 00:11:34.745 { 00:11:34.745 "name": "pt1", 00:11:34.745 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:34.745 "is_configured": true, 00:11:34.745 "data_offset": 2048, 00:11:34.745 "data_size": 63488 00:11:34.745 }, 00:11:34.745 { 00:11:34.745 "name": "pt2", 00:11:34.745 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.745 "is_configured": true, 00:11:34.745 "data_offset": 2048, 00:11:34.745 "data_size": 63488 00:11:34.745 }, 00:11:34.745 { 00:11:34.745 "name": "pt3", 00:11:34.745 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:34.745 "is_configured": true, 00:11:34.745 "data_offset": 2048, 00:11:34.745 "data_size": 63488 00:11:34.745 } 00:11:34.745 ] 00:11:34.745 }' 00:11:34.745 09:48:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.745 09:48:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.004 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:35.004 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:35.004 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:11:35.004 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:35.004 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:35.004 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:35.004 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:35.004 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:35.004 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.004 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.004 [2024-11-27 09:48:36.097578] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.004 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.004 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:35.004 "name": "raid_bdev1", 00:11:35.004 "aliases": [ 00:11:35.004 "8afa455d-5a5d-4bbd-9610-42a839683b04" 00:11:35.004 ], 00:11:35.004 "product_name": "Raid Volume", 00:11:35.004 "block_size": 512, 00:11:35.004 "num_blocks": 63488, 00:11:35.004 "uuid": "8afa455d-5a5d-4bbd-9610-42a839683b04", 00:11:35.004 "assigned_rate_limits": { 00:11:35.004 "rw_ios_per_sec": 0, 00:11:35.004 "rw_mbytes_per_sec": 0, 00:11:35.004 "r_mbytes_per_sec": 0, 00:11:35.004 "w_mbytes_per_sec": 0 00:11:35.004 }, 00:11:35.004 "claimed": false, 00:11:35.004 "zoned": false, 00:11:35.004 "supported_io_types": { 00:11:35.004 "read": true, 00:11:35.004 "write": true, 00:11:35.004 "unmap": false, 00:11:35.004 "flush": false, 00:11:35.004 "reset": true, 00:11:35.004 "nvme_admin": false, 00:11:35.004 "nvme_io": false, 00:11:35.004 "nvme_io_md": false, 00:11:35.004 "write_zeroes": true, 00:11:35.004 "zcopy": false, 00:11:35.004 "get_zone_info": false, 
00:11:35.004 "zone_management": false, 00:11:35.004 "zone_append": false, 00:11:35.004 "compare": false, 00:11:35.004 "compare_and_write": false, 00:11:35.004 "abort": false, 00:11:35.004 "seek_hole": false, 00:11:35.004 "seek_data": false, 00:11:35.004 "copy": false, 00:11:35.004 "nvme_iov_md": false 00:11:35.004 }, 00:11:35.004 "memory_domains": [ 00:11:35.004 { 00:11:35.004 "dma_device_id": "system", 00:11:35.004 "dma_device_type": 1 00:11:35.004 }, 00:11:35.004 { 00:11:35.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.004 "dma_device_type": 2 00:11:35.004 }, 00:11:35.004 { 00:11:35.004 "dma_device_id": "system", 00:11:35.004 "dma_device_type": 1 00:11:35.004 }, 00:11:35.004 { 00:11:35.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.004 "dma_device_type": 2 00:11:35.004 }, 00:11:35.004 { 00:11:35.004 "dma_device_id": "system", 00:11:35.004 "dma_device_type": 1 00:11:35.004 }, 00:11:35.004 { 00:11:35.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.004 "dma_device_type": 2 00:11:35.004 } 00:11:35.004 ], 00:11:35.004 "driver_specific": { 00:11:35.004 "raid": { 00:11:35.004 "uuid": "8afa455d-5a5d-4bbd-9610-42a839683b04", 00:11:35.004 "strip_size_kb": 0, 00:11:35.004 "state": "online", 00:11:35.004 "raid_level": "raid1", 00:11:35.004 "superblock": true, 00:11:35.004 "num_base_bdevs": 3, 00:11:35.004 "num_base_bdevs_discovered": 3, 00:11:35.004 "num_base_bdevs_operational": 3, 00:11:35.004 "base_bdevs_list": [ 00:11:35.004 { 00:11:35.004 "name": "pt1", 00:11:35.004 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:35.004 "is_configured": true, 00:11:35.004 "data_offset": 2048, 00:11:35.004 "data_size": 63488 00:11:35.004 }, 00:11:35.004 { 00:11:35.004 "name": "pt2", 00:11:35.004 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.004 "is_configured": true, 00:11:35.004 "data_offset": 2048, 00:11:35.005 "data_size": 63488 00:11:35.005 }, 00:11:35.005 { 00:11:35.005 "name": "pt3", 00:11:35.005 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:35.005 "is_configured": true, 00:11:35.005 "data_offset": 2048, 00:11:35.005 "data_size": 63488 00:11:35.005 } 00:11:35.005 ] 00:11:35.005 } 00:11:35.005 } 00:11:35.005 }' 00:11:35.005 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:35.264 pt2 00:11:35.264 pt3' 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:35.264 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.265 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.265 [2024-11-27 09:48:36.361054] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.265 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.265 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 8afa455d-5a5d-4bbd-9610-42a839683b04 '!=' 8afa455d-5a5d-4bbd-9610-42a839683b04 ']' 00:11:35.265 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:35.265 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:35.265 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:35.265 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:35.265 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.265 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.265 [2024-11-27 09:48:36.392794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:35.523 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.523 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:35.523 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.523 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.523 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.523 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.523 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:35.523 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.524 09:48:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.524 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.524 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.524 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.524 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.524 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.524 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.524 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.524 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.524 "name": "raid_bdev1", 00:11:35.524 "uuid": "8afa455d-5a5d-4bbd-9610-42a839683b04", 00:11:35.524 "strip_size_kb": 0, 00:11:35.524 "state": "online", 00:11:35.524 "raid_level": "raid1", 00:11:35.524 "superblock": true, 00:11:35.524 "num_base_bdevs": 3, 00:11:35.524 "num_base_bdevs_discovered": 2, 00:11:35.524 "num_base_bdevs_operational": 2, 00:11:35.524 "base_bdevs_list": [ 00:11:35.524 { 00:11:35.524 "name": null, 00:11:35.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.524 "is_configured": false, 00:11:35.524 "data_offset": 0, 00:11:35.524 "data_size": 63488 00:11:35.524 }, 00:11:35.524 { 00:11:35.524 "name": "pt2", 00:11:35.524 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.524 "is_configured": true, 00:11:35.524 "data_offset": 2048, 00:11:35.524 "data_size": 63488 00:11:35.524 }, 00:11:35.524 { 00:11:35.524 "name": "pt3", 00:11:35.524 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.524 "is_configured": true, 00:11:35.524 "data_offset": 2048, 00:11:35.524 "data_size": 63488 00:11:35.524 } 
00:11:35.524 ] 00:11:35.524 }' 00:11:35.524 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.524 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.783 [2024-11-27 09:48:36.836041] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:35.783 [2024-11-27 09:48:36.836079] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.783 [2024-11-27 09:48:36.836195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.783 [2024-11-27 09:48:36.836280] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.783 [2024-11-27 09:48:36.836303] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:35.783 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.783 09:48:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.783 [2024-11-27 09:48:36.911816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:35.783 [2024-11-27 09:48:36.911902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.783 [2024-11-27 09:48:36.911923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:35.783 [2024-11-27 09:48:36.911935] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.043 [2024-11-27 09:48:36.914641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.043 [2024-11-27 09:48:36.914683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:36.043 [2024-11-27 09:48:36.914812] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:36.043 [2024-11-27 09:48:36.914888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:36.043 pt2 00:11:36.043 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.043 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:36.043 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.043 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.043 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.043 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.043 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:36.043 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.043 09:48:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.043 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.043 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.043 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.043 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.043 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.043 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.043 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.043 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.043 "name": "raid_bdev1", 00:11:36.043 "uuid": "8afa455d-5a5d-4bbd-9610-42a839683b04", 00:11:36.043 "strip_size_kb": 0, 00:11:36.043 "state": "configuring", 00:11:36.043 "raid_level": "raid1", 00:11:36.043 "superblock": true, 00:11:36.043 "num_base_bdevs": 3, 00:11:36.043 "num_base_bdevs_discovered": 1, 00:11:36.043 "num_base_bdevs_operational": 2, 00:11:36.043 "base_bdevs_list": [ 00:11:36.043 { 00:11:36.043 "name": null, 00:11:36.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.043 "is_configured": false, 00:11:36.043 "data_offset": 2048, 00:11:36.043 "data_size": 63488 00:11:36.043 }, 00:11:36.043 { 00:11:36.043 "name": "pt2", 00:11:36.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.043 "is_configured": true, 00:11:36.043 "data_offset": 2048, 00:11:36.043 "data_size": 63488 00:11:36.043 }, 00:11:36.043 { 00:11:36.043 "name": null, 00:11:36.043 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.043 "is_configured": false, 00:11:36.043 "data_offset": 2048, 00:11:36.043 "data_size": 63488 00:11:36.043 } 
00:11:36.043 ] 00:11:36.043 }' 00:11:36.043 09:48:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.043 09:48:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.361 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:36.361 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:36.361 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:11:36.361 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:36.361 09:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.361 09:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.361 [2024-11-27 09:48:37.319161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:36.361 [2024-11-27 09:48:37.319260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.361 [2024-11-27 09:48:37.319286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:36.361 [2024-11-27 09:48:37.319299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.361 [2024-11-27 09:48:37.319850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.361 [2024-11-27 09:48:37.319879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:36.361 [2024-11-27 09:48:37.319991] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:36.361 [2024-11-27 09:48:37.320037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:36.361 [2024-11-27 09:48:37.320186] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:11:36.361 [2024-11-27 09:48:37.320203] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:36.361 [2024-11-27 09:48:37.320489] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:36.361 [2024-11-27 09:48:37.320669] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:36.362 [2024-11-27 09:48:37.320686] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:36.362 [2024-11-27 09:48:37.320852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.362 pt3 00:11:36.362 09:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.362 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:36.362 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.362 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.362 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.362 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.362 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:36.362 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.362 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.362 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.362 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.362 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.362 
09:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.362 09:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.362 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.362 09:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.362 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.362 "name": "raid_bdev1", 00:11:36.362 "uuid": "8afa455d-5a5d-4bbd-9610-42a839683b04", 00:11:36.362 "strip_size_kb": 0, 00:11:36.362 "state": "online", 00:11:36.362 "raid_level": "raid1", 00:11:36.362 "superblock": true, 00:11:36.362 "num_base_bdevs": 3, 00:11:36.362 "num_base_bdevs_discovered": 2, 00:11:36.362 "num_base_bdevs_operational": 2, 00:11:36.362 "base_bdevs_list": [ 00:11:36.362 { 00:11:36.362 "name": null, 00:11:36.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.362 "is_configured": false, 00:11:36.362 "data_offset": 2048, 00:11:36.362 "data_size": 63488 00:11:36.362 }, 00:11:36.362 { 00:11:36.362 "name": "pt2", 00:11:36.362 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.362 "is_configured": true, 00:11:36.362 "data_offset": 2048, 00:11:36.362 "data_size": 63488 00:11:36.362 }, 00:11:36.362 { 00:11:36.362 "name": "pt3", 00:11:36.362 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.362 "is_configured": true, 00:11:36.362 "data_offset": 2048, 00:11:36.362 "data_size": 63488 00:11:36.362 } 00:11:36.362 ] 00:11:36.362 }' 00:11:36.362 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.362 09:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.931 [2024-11-27 09:48:37.798304] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:36.931 [2024-11-27 09:48:37.798345] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.931 [2024-11-27 09:48:37.798451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.931 [2024-11-27 09:48:37.798523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.931 [2024-11-27 09:48:37.798534] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.931 [2024-11-27 09:48:37.854205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:36.931 [2024-11-27 09:48:37.854270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.931 [2024-11-27 09:48:37.854292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:36.931 [2024-11-27 09:48:37.854302] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.931 [2024-11-27 09:48:37.856956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.931 [2024-11-27 09:48:37.856991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:36.931 [2024-11-27 09:48:37.857097] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:36.931 [2024-11-27 09:48:37.857160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:36.931 [2024-11-27 09:48:37.857303] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:36.931 [2024-11-27 09:48:37.857318] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:36.931 [2024-11-27 09:48:37.857337] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:11:36.931 [2024-11-27 09:48:37.857401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:36.931 pt1 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.931 "name": "raid_bdev1", 00:11:36.931 "uuid": "8afa455d-5a5d-4bbd-9610-42a839683b04", 00:11:36.931 "strip_size_kb": 0, 00:11:36.931 "state": "configuring", 00:11:36.931 "raid_level": "raid1", 00:11:36.931 "superblock": true, 00:11:36.931 "num_base_bdevs": 3, 00:11:36.931 "num_base_bdevs_discovered": 1, 00:11:36.931 "num_base_bdevs_operational": 2, 00:11:36.931 "base_bdevs_list": [ 00:11:36.931 { 00:11:36.931 "name": null, 00:11:36.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.931 "is_configured": false, 00:11:36.931 "data_offset": 2048, 00:11:36.931 "data_size": 63488 00:11:36.931 }, 00:11:36.931 { 00:11:36.931 "name": "pt2", 00:11:36.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.931 "is_configured": true, 00:11:36.931 "data_offset": 2048, 00:11:36.931 "data_size": 63488 00:11:36.931 }, 00:11:36.931 { 00:11:36.931 "name": null, 00:11:36.931 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.931 "is_configured": false, 00:11:36.931 "data_offset": 2048, 00:11:36.931 "data_size": 63488 00:11:36.931 } 00:11:36.931 ] 00:11:36.931 }' 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.931 09:48:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.191 [2024-11-27 09:48:38.313448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:37.191 [2024-11-27 09:48:38.313546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.191 [2024-11-27 09:48:38.313572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:37.191 [2024-11-27 09:48:38.313582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.191 [2024-11-27 09:48:38.314177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.191 [2024-11-27 09:48:38.314196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:37.191 [2024-11-27 09:48:38.314302] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:37.191 [2024-11-27 09:48:38.314326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:37.191 [2024-11-27 09:48:38.314473] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:37.191 [2024-11-27 09:48:38.314483] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:37.191 [2024-11-27 09:48:38.314757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:37.191 [2024-11-27 09:48:38.314912] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:37.191 [2024-11-27 09:48:38.314929] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:37.191 [2024-11-27 09:48:38.315112] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.191 pt3 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.191 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.451 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.451 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:37.451 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.451 "name": "raid_bdev1", 00:11:37.451 "uuid": "8afa455d-5a5d-4bbd-9610-42a839683b04", 00:11:37.451 "strip_size_kb": 0, 00:11:37.451 "state": "online", 00:11:37.451 "raid_level": "raid1", 00:11:37.451 "superblock": true, 00:11:37.451 "num_base_bdevs": 3, 00:11:37.451 "num_base_bdevs_discovered": 2, 00:11:37.451 "num_base_bdevs_operational": 2, 00:11:37.451 "base_bdevs_list": [ 00:11:37.451 { 00:11:37.451 "name": null, 00:11:37.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.451 "is_configured": false, 00:11:37.451 "data_offset": 2048, 00:11:37.451 "data_size": 63488 00:11:37.451 }, 00:11:37.451 { 00:11:37.451 "name": "pt2", 00:11:37.451 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.451 "is_configured": true, 00:11:37.451 "data_offset": 2048, 00:11:37.451 "data_size": 63488 00:11:37.451 }, 00:11:37.451 { 00:11:37.451 "name": "pt3", 00:11:37.451 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:37.451 "is_configured": true, 00:11:37.451 "data_offset": 2048, 00:11:37.451 "data_size": 63488 00:11:37.451 } 00:11:37.451 ] 00:11:37.451 }' 00:11:37.451 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.451 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.710 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:37.710 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:37.710 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.710 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.710 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.710 09:48:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:37.710 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:37.710 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.710 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.710 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:37.710 [2024-11-27 09:48:38.792906] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:37.710 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.710 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8afa455d-5a5d-4bbd-9610-42a839683b04 '!=' 8afa455d-5a5d-4bbd-9610-42a839683b04 ']' 00:11:37.710 09:48:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68911 00:11:37.710 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68911 ']' 00:11:37.710 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68911 00:11:37.710 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:37.710 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:37.710 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68911 00:11:37.969 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:37.969 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:37.969 killing process with pid 68911 00:11:37.969 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68911' 00:11:37.970 09:48:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68911 00:11:37.970 [2024-11-27 09:48:38.862323] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:37.970 09:48:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68911 00:11:37.970 [2024-11-27 09:48:38.862464] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:37.970 [2024-11-27 09:48:38.862546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.970 [2024-11-27 09:48:38.862564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:38.228 [2024-11-27 09:48:39.199914] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:39.607 09:48:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:39.607 00:11:39.607 real 0m7.641s 00:11:39.607 user 0m11.710s 00:11:39.607 sys 0m1.422s 00:11:39.607 09:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.607 09:48:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.607 ************************************ 00:11:39.607 END TEST raid_superblock_test 00:11:39.607 ************************************ 00:11:39.607 09:48:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:11:39.607 09:48:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:39.607 09:48:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.607 09:48:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:39.607 ************************************ 00:11:39.607 START TEST raid_read_error_test 00:11:39.607 ************************************ 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:11:39.607 09:48:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:39.607 09:48:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FQbTCgMmLL 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69351 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69351 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69351 ']' 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.607 09:48:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:39.607 [2024-11-27 09:48:40.605012] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:11:39.607 [2024-11-27 09:48:40.605156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69351 ] 00:11:39.866 [2024-11-27 09:48:40.783680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.866 [2024-11-27 09:48:40.924174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.126 [2024-11-27 09:48:41.160953] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.126 [2024-11-27 09:48:41.161049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.385 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.385 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:40.385 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:40.385 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:40.385 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.385 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.385 BaseBdev1_malloc 00:11:40.385 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.385 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:40.385 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.385 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.385 true 00:11:40.386 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:40.386 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:40.386 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.386 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.386 [2024-11-27 09:48:41.500878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:40.386 [2024-11-27 09:48:41.500958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.386 [2024-11-27 09:48:41.500983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:40.386 [2024-11-27 09:48:41.500996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.386 [2024-11-27 09:48:41.503552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.386 [2024-11-27 09:48:41.503591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:40.386 BaseBdev1 00:11:40.386 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.386 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:40.386 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:40.386 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.386 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.645 BaseBdev2_malloc 00:11:40.645 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.645 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:40.645 09:48:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.645 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.645 true 00:11:40.645 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.645 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:40.645 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.645 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.645 [2024-11-27 09:48:41.568424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:40.645 [2024-11-27 09:48:41.568486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.645 [2024-11-27 09:48:41.568506] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:40.645 [2024-11-27 09:48:41.568518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.645 [2024-11-27 09:48:41.571022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.645 [2024-11-27 09:48:41.571059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:40.645 BaseBdev2 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.646 BaseBdev3_malloc 00:11:40.646 09:48:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.646 true 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.646 [2024-11-27 09:48:41.644146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:40.646 [2024-11-27 09:48:41.644217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.646 [2024-11-27 09:48:41.644238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:40.646 [2024-11-27 09:48:41.644250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.646 [2024-11-27 09:48:41.646702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.646 [2024-11-27 09:48:41.646740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:40.646 BaseBdev3 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.646 [2024-11-27 09:48:41.656238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:40.646 [2024-11-27 09:48:41.658399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:40.646 [2024-11-27 09:48:41.658482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.646 [2024-11-27 09:48:41.658695] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:40.646 [2024-11-27 09:48:41.658715] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:40.646 [2024-11-27 09:48:41.658989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:40.646 [2024-11-27 09:48:41.659221] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:40.646 [2024-11-27 09:48:41.659243] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:40.646 [2024-11-27 09:48:41.659418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.646 09:48:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.646 "name": "raid_bdev1", 00:11:40.646 "uuid": "6ca97a9c-8a6b-468c-940f-3538de209642", 00:11:40.646 "strip_size_kb": 0, 00:11:40.646 "state": "online", 00:11:40.646 "raid_level": "raid1", 00:11:40.646 "superblock": true, 00:11:40.646 "num_base_bdevs": 3, 00:11:40.646 "num_base_bdevs_discovered": 3, 00:11:40.646 "num_base_bdevs_operational": 3, 00:11:40.646 "base_bdevs_list": [ 00:11:40.646 { 00:11:40.646 "name": "BaseBdev1", 00:11:40.646 "uuid": "be53fafe-6027-54bc-84ba-4192e4f0bbd6", 00:11:40.646 "is_configured": true, 00:11:40.646 "data_offset": 2048, 00:11:40.646 "data_size": 63488 00:11:40.646 }, 00:11:40.646 { 00:11:40.646 "name": "BaseBdev2", 00:11:40.646 "uuid": "e3540204-4e0e-5d44-b608-3ed905bb96ca", 00:11:40.646 "is_configured": true, 00:11:40.646 "data_offset": 2048, 00:11:40.646 "data_size": 63488 
00:11:40.646 }, 00:11:40.646 { 00:11:40.646 "name": "BaseBdev3", 00:11:40.646 "uuid": "505d274a-e5e4-5cee-8435-3f05e1eb6b04", 00:11:40.646 "is_configured": true, 00:11:40.646 "data_offset": 2048, 00:11:40.646 "data_size": 63488 00:11:40.646 } 00:11:40.646 ] 00:11:40.646 }' 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.646 09:48:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.906 09:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:40.906 09:48:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:41.165 [2024-11-27 09:48:42.080885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.105 
09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.105 "name": "raid_bdev1", 00:11:42.105 "uuid": "6ca97a9c-8a6b-468c-940f-3538de209642", 00:11:42.105 "strip_size_kb": 0, 00:11:42.105 "state": "online", 00:11:42.105 "raid_level": "raid1", 00:11:42.105 "superblock": true, 00:11:42.105 "num_base_bdevs": 3, 00:11:42.105 "num_base_bdevs_discovered": 3, 00:11:42.105 "num_base_bdevs_operational": 3, 00:11:42.105 "base_bdevs_list": [ 00:11:42.105 { 00:11:42.105 "name": "BaseBdev1", 00:11:42.105 "uuid": "be53fafe-6027-54bc-84ba-4192e4f0bbd6", 
00:11:42.105 "is_configured": true, 00:11:42.105 "data_offset": 2048, 00:11:42.105 "data_size": 63488 00:11:42.105 }, 00:11:42.105 { 00:11:42.105 "name": "BaseBdev2", 00:11:42.105 "uuid": "e3540204-4e0e-5d44-b608-3ed905bb96ca", 00:11:42.105 "is_configured": true, 00:11:42.105 "data_offset": 2048, 00:11:42.105 "data_size": 63488 00:11:42.105 }, 00:11:42.105 { 00:11:42.105 "name": "BaseBdev3", 00:11:42.105 "uuid": "505d274a-e5e4-5cee-8435-3f05e1eb6b04", 00:11:42.105 "is_configured": true, 00:11:42.105 "data_offset": 2048, 00:11:42.105 "data_size": 63488 00:11:42.105 } 00:11:42.105 ] 00:11:42.105 }' 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.105 09:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.366 09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:42.366 09:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.366 09:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.366 [2024-11-27 09:48:43.461586] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:42.366 [2024-11-27 09:48:43.461628] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.366 [2024-11-27 09:48:43.464559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.366 [2024-11-27 09:48:43.464622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.366 [2024-11-27 09:48:43.464741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.366 [2024-11-27 09:48:43.464757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:42.366 09:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:42.366 09:48:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69351 00:11:42.366 09:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69351 ']' 00:11:42.366 { 00:11:42.366 "results": [ 00:11:42.366 { 00:11:42.366 "job": "raid_bdev1", 00:11:42.366 "core_mask": "0x1", 00:11:42.366 "workload": "randrw", 00:11:42.366 "percentage": 50, 00:11:42.366 "status": "finished", 00:11:42.366 "queue_depth": 1, 00:11:42.366 "io_size": 131072, 00:11:42.366 "runtime": 1.381392, 00:11:42.366 "iops": 9418.036299616619, 00:11:42.366 "mibps": 1177.2545374520773, 00:11:42.366 "io_failed": 0, 00:11:42.366 "io_timeout": 0, 00:11:42.366 "avg_latency_us": 103.42779709259588, 00:11:42.366 "min_latency_us": 24.258515283842794, 00:11:42.366 "max_latency_us": 1559.6995633187773 00:11:42.366 } 00:11:42.366 ], 00:11:42.366 "core_count": 1 00:11:42.366 } 00:11:42.366 09:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69351 00:11:42.366 09:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:42.366 09:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.366 09:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69351 00:11:42.626 09:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:42.626 09:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:42.626 killing process with pid 69351 00:11:42.626 09:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69351' 00:11:42.626 09:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69351 00:11:42.626 09:48:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69351 00:11:42.626 [2024-11-27 09:48:43.499568] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:42.886 [2024-11-27 09:48:43.766489] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.267 09:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FQbTCgMmLL 00:11:44.267 09:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:44.267 09:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:44.267 09:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:44.267 09:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:44.267 09:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:44.267 09:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:44.267 09:48:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:44.267 00:11:44.267 real 0m4.620s 00:11:44.267 user 0m5.221s 00:11:44.267 sys 0m0.668s 00:11:44.267 09:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.267 09:48:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.267 ************************************ 00:11:44.267 END TEST raid_read_error_test 00:11:44.267 ************************************ 00:11:44.267 09:48:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:11:44.267 09:48:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:44.267 09:48:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.267 09:48:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.267 ************************************ 00:11:44.267 START TEST raid_write_error_test 00:11:44.267 ************************************ 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:44.267 09:48:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cD8yKR2nn9 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69497 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69497 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69497 ']' 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:44.267 09:48:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.267 [2024-11-27 09:48:45.289956] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:11:44.268 [2024-11-27 09:48:45.290110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69497 ] 00:11:44.527 [2024-11-27 09:48:45.470608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.527 [2024-11-27 09:48:45.615651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.787 [2024-11-27 09:48:45.858527] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.787 [2024-11-27 09:48:45.858583] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.047 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.047 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:45.047 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.047 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:45.047 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.047 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.307 BaseBdev1_malloc 00:11:45.307 09:48:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.307 true 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.307 [2024-11-27 09:48:46.199764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:45.307 [2024-11-27 09:48:46.199826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.307 [2024-11-27 09:48:46.199849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:45.307 [2024-11-27 09:48:46.199861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.307 [2024-11-27 09:48:46.202369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.307 [2024-11-27 09:48:46.202406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:45.307 BaseBdev1 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.307 BaseBdev2_malloc 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.307 true 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.307 [2024-11-27 09:48:46.266647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:45.307 [2024-11-27 09:48:46.266708] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.307 [2024-11-27 09:48:46.266726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:45.307 [2024-11-27 09:48:46.266738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.307 [2024-11-27 09:48:46.269272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.307 [2024-11-27 09:48:46.269309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:45.307 BaseBdev2 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.307 BaseBdev3_malloc 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.307 true 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.307 [2024-11-27 09:48:46.349424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:45.307 [2024-11-27 09:48:46.349484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.307 [2024-11-27 09:48:46.349504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:45.307 [2024-11-27 09:48:46.349516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.307 [2024-11-27 09:48:46.352003] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.307 [2024-11-27 09:48:46.352055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:45.307 BaseBdev3 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.307 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.307 [2024-11-27 09:48:46.357479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.307 [2024-11-27 09:48:46.359693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:45.307 [2024-11-27 09:48:46.359777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:45.307 [2024-11-27 09:48:46.360028] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:45.307 [2024-11-27 09:48:46.360047] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:45.307 [2024-11-27 09:48:46.360341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:45.307 [2024-11-27 09:48:46.360548] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:45.307 [2024-11-27 09:48:46.360569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:45.308 [2024-11-27 09:48:46.360735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.308 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.308 09:48:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:45.308 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.308 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.308 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.308 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.308 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.308 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.308 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.308 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.308 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.308 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.308 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.308 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.308 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.308 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.308 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.308 "name": "raid_bdev1", 00:11:45.308 "uuid": "ebccc353-200e-4085-a058-2981a6b99569", 00:11:45.308 "strip_size_kb": 0, 00:11:45.308 "state": "online", 00:11:45.308 "raid_level": "raid1", 00:11:45.308 "superblock": true, 00:11:45.308 
"num_base_bdevs": 3, 00:11:45.308 "num_base_bdevs_discovered": 3, 00:11:45.308 "num_base_bdevs_operational": 3, 00:11:45.308 "base_bdevs_list": [ 00:11:45.308 { 00:11:45.308 "name": "BaseBdev1", 00:11:45.308 "uuid": "dda16972-a2ab-57e6-ab2a-3bbdd7b2a5cc", 00:11:45.308 "is_configured": true, 00:11:45.308 "data_offset": 2048, 00:11:45.308 "data_size": 63488 00:11:45.308 }, 00:11:45.308 { 00:11:45.308 "name": "BaseBdev2", 00:11:45.308 "uuid": "3b5b8ccc-514c-5b16-82e6-8d51e046898f", 00:11:45.308 "is_configured": true, 00:11:45.308 "data_offset": 2048, 00:11:45.308 "data_size": 63488 00:11:45.308 }, 00:11:45.308 { 00:11:45.308 "name": "BaseBdev3", 00:11:45.308 "uuid": "f8373e9e-358b-560a-850e-6a8a07738e5f", 00:11:45.308 "is_configured": true, 00:11:45.308 "data_offset": 2048, 00:11:45.308 "data_size": 63488 00:11:45.308 } 00:11:45.308 ] 00:11:45.308 }' 00:11:45.308 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.308 09:48:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.877 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:45.877 09:48:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:45.877 [2024-11-27 09:48:46.942055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:46.817 09:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:46.817 09:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.817 09:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.817 [2024-11-27 09:48:47.847127] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:46.817 [2024-11-27 09:48:47.847184] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:46.817 [2024-11-27 09:48:47.847424] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 00:11:46.817 09:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.817 09:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:46.817 09:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:46.817 09:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:46.817 09:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:46.817 09:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:46.817 09:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.817 09:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.817 09:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.817 09:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.817 09:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:46.817 09:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.817 09:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.817 09:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.817 09:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.817 09:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.817 09:48:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:46.817 09:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:46.818 09:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:46.818 09:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:46.818 09:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:46.818 "name": "raid_bdev1",
00:11:46.818 "uuid": "ebccc353-200e-4085-a058-2981a6b99569",
00:11:46.818 "strip_size_kb": 0,
00:11:46.818 "state": "online",
00:11:46.818 "raid_level": "raid1",
00:11:46.818 "superblock": true,
00:11:46.818 "num_base_bdevs": 3,
00:11:46.818 "num_base_bdevs_discovered": 2,
00:11:46.818 "num_base_bdevs_operational": 2,
00:11:46.818 "base_bdevs_list": [
00:11:46.818 {
00:11:46.818 "name": null,
00:11:46.818 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:46.818 "is_configured": false,
00:11:46.818 "data_offset": 0,
00:11:46.818 "data_size": 63488
00:11:46.818 },
00:11:46.818 {
00:11:46.818 "name": "BaseBdev2",
00:11:46.818 "uuid": "3b5b8ccc-514c-5b16-82e6-8d51e046898f",
00:11:46.818 "is_configured": true,
00:11:46.818 "data_offset": 2048,
00:11:46.818 "data_size": 63488
00:11:46.818 },
00:11:46.818 {
00:11:46.818 "name": "BaseBdev3",
00:11:46.818 "uuid": "f8373e9e-358b-560a-850e-6a8a07738e5f",
00:11:46.818 "is_configured": true,
00:11:46.818 "data_offset": 2048,
00:11:46.818 "data_size": 63488
00:11:46.818 }
00:11:46.818 ]
00:11:46.818 }'
00:11:46.818 09:48:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:46.818 09:48:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.389 09:48:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:47.389 09:48:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.389 09:48:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:47.389 [2024-11-27 09:48:48.263655] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:47.389 [2024-11-27 09:48:48.263701] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:47.389 [2024-11-27 09:48:48.266744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:47.389 [2024-11-27 09:48:48.266823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:47.389 [2024-11-27 09:48:48.266916] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:47.389 [2024-11-27 09:48:48.266935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:11:47.389 09:48:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.389 09:48:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69497
00:11:47.389 {
00:11:47.389 "results": [
00:11:47.389 {
00:11:47.389 "job": "raid_bdev1",
00:11:47.389 "core_mask": "0x1",
00:11:47.389 "workload": "randrw",
00:11:47.389 "percentage": 50,
00:11:47.389 "status": "finished",
00:11:47.389 "queue_depth": 1,
00:11:47.389 "io_size": 131072,
00:11:47.389 "runtime": 1.321987,
00:11:47.389 "iops": 10532.630048555697,
00:11:47.389 "mibps": 1316.578756069462,
00:11:47.389 "io_failed": 0,
00:11:47.389 "io_timeout": 0,
00:11:47.389 "avg_latency_us": 92.10276723673994,
00:11:47.389 "min_latency_us": 24.258515283842794,
00:11:47.389 "max_latency_us": 1531.0812227074236
00:11:47.389 }
00:11:47.389 ],
00:11:47.389 "core_count": 1
00:11:47.389 }
00:11:47.389 09:48:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69497 ']'
00:11:47.389 09:48:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69497
00:11:47.389 09:48:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:11:47.389 09:48:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:47.389 09:48:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69497
killing process with pid 69497
09:48:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:47.390 09:48:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:47.390 09:48:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69497'
00:11:47.390 09:48:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69497
00:11:47.390 09:48:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69497
00:11:47.390 [2024-11-27 09:48:48.298195] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:11:47.650 [2024-11-27 09:48:48.556831] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:11:49.045 09:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cD8yKR2nn9
00:11:49.045 09:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:11:49.045 09:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:11:49.045 09:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:11:49.045 09:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:11:49.045 09:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:49.045 09:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:11:49.045 09:48:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:11:49.045
00:11:49.045 real 0m4.699s
00:11:49.045 user 0m5.438s
00:11:49.045 sys 0m0.684s
00:11:49.045 09:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:49.045 09:48:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:49.045 ************************************
00:11:49.045 END TEST raid_write_error_test
00:11:49.045 ************************************
00:11:49.045 09:48:49 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:11:49.045 09:48:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:11:49.045 09:48:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false
00:11:49.045 09:48:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:11:49.045 09:48:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:49.045 09:48:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:49.045 ************************************
00:11:49.045 START TEST raid_state_function_test
00:11:49.045 ************************************
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:11:49.045 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:11:49.046 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:11:49.046 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:11:49.046 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:11:49.046 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:11:49.046 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:11:49.046 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:11:49.046 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:11:49.046 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69635
00:11:49.046 Process raid pid: 69635
09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69635'
00:11:49.046 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69635
00:11:49.046 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69635 ']'
00:11:49.046 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:49.046 09:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:11:49.046 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:49.046 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:49.046 09:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:49.046 [2024-11-27 09:48:50.045522] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization...
00:11:49.046 [2024-11-27 09:48:50.045656] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:49.313 [2024-11-27 09:48:50.225100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:49.313 [2024-11-27 09:48:50.369980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:49.574 [2024-11-27 09:48:50.615630] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:49.574 [2024-11-27 09:48:50.615693] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:49.835 [2024-11-27 09:48:50.891574] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:49.835 [2024-11-27 09:48:50.891627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:49.835 [2024-11-27 09:48:50.891655] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:49.835 [2024-11-27 09:48:50.891666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:49.835 [2024-11-27 09:48:50.891673] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:49.835 [2024-11-27 09:48:50.891683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:49.835 [2024-11-27 09:48:50.891689] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:49.835 [2024-11-27 09:48:50.891699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:49.835 "name": "Existed_Raid",
00:11:49.835 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:49.835 "strip_size_kb": 64,
00:11:49.835 "state": "configuring",
00:11:49.835 "raid_level": "raid0",
00:11:49.835 "superblock": false,
00:11:49.835 "num_base_bdevs": 4,
00:11:49.835 "num_base_bdevs_discovered": 0,
00:11:49.835 "num_base_bdevs_operational": 4,
00:11:49.835 "base_bdevs_list": [
00:11:49.835 {
00:11:49.835 "name": "BaseBdev1",
00:11:49.835 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:49.835 "is_configured": false,
00:11:49.835 "data_offset": 0,
00:11:49.835 "data_size": 0
00:11:49.835 },
00:11:49.835 {
00:11:49.835 "name": "BaseBdev2",
00:11:49.835 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:49.835 "is_configured": false,
00:11:49.835 "data_offset": 0,
00:11:49.835 "data_size": 0
00:11:49.835 },
00:11:49.835 {
00:11:49.835 "name": "BaseBdev3",
00:11:49.835 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:49.835 "is_configured": false,
00:11:49.835 "data_offset": 0,
00:11:49.835 "data_size": 0
00:11:49.835 },
00:11:49.835 {
00:11:49.835 "name": "BaseBdev4",
00:11:49.835 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:49.835 "is_configured": false,
00:11:49.835 "data_offset": 0,
00:11:49.835 "data_size": 0
00:11:49.835 }
00:11:49.835 ]
00:11:49.835 }'
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:49.835 09:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.407 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:50.407 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.407 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.407 [2024-11-27 09:48:51.382664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:50.407 [2024-11-27 09:48:51.382717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:11:50.407 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.407 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:50.407 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.407 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.407 [2024-11-27 09:48:51.390644] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:50.407 [2024-11-27 09:48:51.390693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:50.407 [2024-11-27 09:48:51.390702] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:50.407 [2024-11-27 09:48:51.390712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:50.407 [2024-11-27 09:48:51.390719] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:50.408 [2024-11-27 09:48:51.390729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:50.408 [2024-11-27 09:48:51.390735] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:50.408 [2024-11-27 09:48:51.390744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.408 [2024-11-27 09:48:51.442226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:50.408 BaseBdev1
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.408 [
00:11:50.408 {
00:11:50.408 "name": "BaseBdev1",
00:11:50.408 "aliases": [
00:11:50.408 "5008750b-878e-4d75-bd63-cd760d7318ed"
00:11:50.408 ],
00:11:50.408 "product_name": "Malloc disk",
00:11:50.408 "block_size": 512,
00:11:50.408 "num_blocks": 65536,
00:11:50.408 "uuid": "5008750b-878e-4d75-bd63-cd760d7318ed",
00:11:50.408 "assigned_rate_limits": {
00:11:50.408 "rw_ios_per_sec": 0,
00:11:50.408 "rw_mbytes_per_sec": 0,
00:11:50.408 "r_mbytes_per_sec": 0,
00:11:50.408 "w_mbytes_per_sec": 0
00:11:50.408 },
00:11:50.408 "claimed": true,
00:11:50.408 "claim_type": "exclusive_write",
00:11:50.408 "zoned": false,
00:11:50.408 "supported_io_types": {
00:11:50.408 "read": true,
00:11:50.408 "write": true,
00:11:50.408 "unmap": true,
00:11:50.408 "flush": true,
00:11:50.408 "reset": true,
00:11:50.408 "nvme_admin": false,
00:11:50.408 "nvme_io": false,
00:11:50.408 "nvme_io_md": false,
00:11:50.408 "write_zeroes": true,
00:11:50.408 "zcopy": true,
00:11:50.408 "get_zone_info": false,
00:11:50.408 "zone_management": false,
00:11:50.408 "zone_append": false,
00:11:50.408 "compare": false,
00:11:50.408 "compare_and_write": false,
00:11:50.408 "abort": true,
00:11:50.408 "seek_hole": false,
00:11:50.408 "seek_data": false,
00:11:50.408 "copy": true,
00:11:50.408 "nvme_iov_md": false
00:11:50.408 },
00:11:50.408 "memory_domains": [
00:11:50.408 {
00:11:50.408 "dma_device_id": "system",
00:11:50.408 "dma_device_type": 1
00:11:50.408 },
00:11:50.408 {
00:11:50.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:50.408 "dma_device_type": 2
00:11:50.408 }
00:11:50.408 ],
00:11:50.408 "driver_specific": {}
00:11:50.408 }
00:11:50.408 ]
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.408 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:50.408 "name": "Existed_Raid",
00:11:50.408 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:50.408 "strip_size_kb": 64,
00:11:50.408 "state": "configuring",
00:11:50.408 "raid_level": "raid0",
00:11:50.408 "superblock": false,
00:11:50.408 "num_base_bdevs": 4,
00:11:50.408 "num_base_bdevs_discovered": 1,
00:11:50.408 "num_base_bdevs_operational": 4,
00:11:50.408 "base_bdevs_list": [
00:11:50.408 {
00:11:50.408 "name": "BaseBdev1",
00:11:50.408 "uuid": "5008750b-878e-4d75-bd63-cd760d7318ed",
00:11:50.408 "is_configured": true,
00:11:50.408 "data_offset": 0,
00:11:50.408 "data_size": 65536
00:11:50.408 },
00:11:50.408 {
00:11:50.408 "name": "BaseBdev2",
00:11:50.408 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:50.408 "is_configured": false,
00:11:50.408 "data_offset": 0,
00:11:50.408 "data_size": 0
00:11:50.408 },
00:11:50.408 {
00:11:50.408 "name": "BaseBdev3",
00:11:50.408 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:50.408 "is_configured": false,
00:11:50.408 "data_offset": 0,
00:11:50.408 "data_size": 0
00:11:50.408 },
00:11:50.408 {
00:11:50.408 "name": "BaseBdev4",
00:11:50.408 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:50.409 "is_configured": false,
00:11:50.409 "data_offset": 0,
00:11:50.409 "data_size": 0
00:11:50.409 }
00:11:50.409 ]
00:11:50.409 }'
00:11:50.409 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:50.409 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.980 [2024-11-27 09:48:51.917478] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:50.980 [2024-11-27 09:48:51.917611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.980 [2024-11-27 09:48:51.925548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:50.980 [2024-11-27 09:48:51.927878] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:50.980 [2024-11-27 09:48:51.927980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:50.980 [2024-11-27 09:48:51.928030] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:50.980 [2024-11-27 09:48:51.928058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:50.980 [2024-11-27 09:48:51.928078] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:50.980 [2024-11-27 09:48:51.928100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:50.980 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:50.981 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:50.981 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:50.981 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:50.981 "name": "Existed_Raid",
00:11:50.981 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:50.981 "strip_size_kb": 64,
00:11:50.981 "state": "configuring",
00:11:50.981 "raid_level": "raid0",
00:11:50.981 "superblock": false,
00:11:50.981 "num_base_bdevs": 4,
00:11:50.981 "num_base_bdevs_discovered": 1,
00:11:50.981 "num_base_bdevs_operational": 4,
00:11:50.981 "base_bdevs_list": [
00:11:50.981 {
00:11:50.981 "name": "BaseBdev1",
00:11:50.981 "uuid": "5008750b-878e-4d75-bd63-cd760d7318ed",
00:11:50.981 "is_configured": true,
00:11:50.981 "data_offset": 0,
00:11:50.981 "data_size": 65536
00:11:50.981 },
00:11:50.981 {
00:11:50.981 "name": "BaseBdev2",
00:11:50.981 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:50.981 "is_configured": false,
00:11:50.981 "data_offset": 0,
00:11:50.981 "data_size": 0
00:11:50.981 },
00:11:50.981 {
00:11:50.981 "name": "BaseBdev3",
00:11:50.981 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:50.981 "is_configured": false,
00:11:50.981 "data_offset": 0,
00:11:50.981 "data_size": 0
00:11:50.981 },
00:11:50.981 {
00:11:50.981 "name": "BaseBdev4",
00:11:50.981 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:50.981 "is_configured": false,
00:11:50.981 "data_offset": 0,
00:11:50.981 "data_size": 0
00:11:50.981 }
00:11:50.981 ]
00:11:50.981 }'
00:11:50.981 09:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:50.981 09:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.553 [2024-11-27 09:48:52.419623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:51.553 BaseBdev2
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.553 [
00:11:51.553 {
00:11:51.553 "name": "BaseBdev2",
00:11:51.553 "aliases": [
00:11:51.553 "c5f5554b-5432-4a1d-a8e9-8807289a21c8"
00:11:51.553 ],
00:11:51.553 "product_name": "Malloc disk",
00:11:51.553 "block_size": 512,
00:11:51.553 "num_blocks": 65536,
00:11:51.553 "uuid": "c5f5554b-5432-4a1d-a8e9-8807289a21c8",
00:11:51.553 "assigned_rate_limits": {
00:11:51.553 "rw_ios_per_sec": 0,
00:11:51.553 "rw_mbytes_per_sec": 0,
00:11:51.553 "r_mbytes_per_sec": 0,
00:11:51.553 "w_mbytes_per_sec": 0
00:11:51.553 },
00:11:51.553 "claimed": true,
00:11:51.553 "claim_type": "exclusive_write",
00:11:51.553 "zoned": false,
00:11:51.553 "supported_io_types": {
00:11:51.553 "read": true,
00:11:51.553 "write": true,
00:11:51.553 "unmap": true,
00:11:51.553 "flush": true,
00:11:51.553 "reset": true,
00:11:51.553 "nvme_admin": false,
00:11:51.553 "nvme_io": false,
00:11:51.553 "nvme_io_md": false,
00:11:51.553 "write_zeroes": true,
00:11:51.553 "zcopy": true,
00:11:51.553 "get_zone_info": false,
00:11:51.553 "zone_management": false,
00:11:51.553 "zone_append": false,
00:11:51.553 "compare": false,
00:11:51.553 "compare_and_write": false,
00:11:51.553 "abort": true,
00:11:51.553 "seek_hole": false,
00:11:51.553 "seek_data": false,
00:11:51.553 "copy": true,
00:11:51.553 "nvme_iov_md": false
00:11:51.553 },
00:11:51.553 "memory_domains": [
00:11:51.553 {
00:11:51.553 "dma_device_id": "system",
00:11:51.553 "dma_device_type": 1
00:11:51.553 },
00:11:51.553 {
00:11:51.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:51.553 "dma_device_type": 2
00:11:51.553 }
00:11:51.553 ],
00:11:51.553 "driver_specific": {}
00:11:51.553 }
00:11:51.553 ]
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.553 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:51.553 "name": "Existed_Raid",
00:11:51.553 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:51.553 "strip_size_kb": 64,
00:11:51.553 "state": "configuring",
00:11:51.553 "raid_level": "raid0",
00:11:51.553 "superblock": false,
00:11:51.553 "num_base_bdevs": 4,
00:11:51.553 "num_base_bdevs_discovered": 2,
00:11:51.553 "num_base_bdevs_operational": 4,
00:11:51.553 "base_bdevs_list": [
00:11:51.553 {
00:11:51.553 "name": "BaseBdev1",
00:11:51.553 "uuid": "5008750b-878e-4d75-bd63-cd760d7318ed",
00:11:51.553 "is_configured": true,
00:11:51.553 "data_offset": 0,
00:11:51.553 "data_size": 65536
00:11:51.553 },
00:11:51.553 {
00:11:51.553 "name": "BaseBdev2",
00:11:51.553 "uuid": "c5f5554b-5432-4a1d-a8e9-8807289a21c8",
00:11:51.553 "is_configured": true,
00:11:51.553 "data_offset": 0,
00:11:51.553 "data_size": 65536
00:11:51.553 },
00:11:51.553 {
00:11:51.553 "name": "BaseBdev3",
00:11:51.553 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:51.553 "is_configured": false,
00:11:51.553 "data_offset": 0,
00:11:51.553 "data_size": 0
00:11:51.553 },
00:11:51.553 {
00:11:51.553 "name": "BaseBdev4",
00:11:51.553 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:51.553 "is_configured": false,
00:11:51.553 "data_offset": 0,
00:11:51.553 "data_size": 0
00:11:51.554 }
00:11:51.554 ]
00:11:51.554 }'
00:11:51.554 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:51.554 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.814 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:51.814 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.814 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.814 [2024-11-27 09:48:52.929478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:51.814 BaseBdev3
00:11:51.814 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.814 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:11:51.814 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:11:51.814 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:51.814 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:51.814 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:51.814 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:51.814 09:48:52 bdev_raid.raid_state_function_test --
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:51.814 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:51.814 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.814 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.814 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.814 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:51.814 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.814 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.078 [ 00:11:52.078 { 00:11:52.078 "name": "BaseBdev3", 00:11:52.078 "aliases": [ 00:11:52.078 "d41df974-459f-4677-a1a5-8f834843f316" 00:11:52.078 ], 00:11:52.078 "product_name": "Malloc disk", 00:11:52.078 "block_size": 512, 00:11:52.078 "num_blocks": 65536, 00:11:52.078 "uuid": "d41df974-459f-4677-a1a5-8f834843f316", 00:11:52.078 "assigned_rate_limits": { 00:11:52.078 "rw_ios_per_sec": 0, 00:11:52.078 "rw_mbytes_per_sec": 0, 00:11:52.078 "r_mbytes_per_sec": 0, 00:11:52.078 "w_mbytes_per_sec": 0 00:11:52.078 }, 00:11:52.078 "claimed": true, 00:11:52.078 "claim_type": "exclusive_write", 00:11:52.078 "zoned": false, 00:11:52.078 "supported_io_types": { 00:11:52.078 "read": true, 00:11:52.078 "write": true, 00:11:52.078 "unmap": true, 00:11:52.078 "flush": true, 00:11:52.078 "reset": true, 00:11:52.078 "nvme_admin": false, 00:11:52.078 "nvme_io": false, 00:11:52.078 "nvme_io_md": false, 00:11:52.078 "write_zeroes": true, 00:11:52.078 "zcopy": true, 00:11:52.078 "get_zone_info": false, 00:11:52.078 "zone_management": false, 00:11:52.078 "zone_append": false, 00:11:52.078 "compare": false, 00:11:52.078 "compare_and_write": false, 
00:11:52.078 "abort": true, 00:11:52.078 "seek_hole": false, 00:11:52.078 "seek_data": false, 00:11:52.078 "copy": true, 00:11:52.078 "nvme_iov_md": false 00:11:52.078 }, 00:11:52.078 "memory_domains": [ 00:11:52.078 { 00:11:52.078 "dma_device_id": "system", 00:11:52.078 "dma_device_type": 1 00:11:52.078 }, 00:11:52.078 { 00:11:52.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.078 "dma_device_type": 2 00:11:52.078 } 00:11:52.078 ], 00:11:52.078 "driver_specific": {} 00:11:52.078 } 00:11:52.078 ] 00:11:52.078 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.078 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:52.078 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:52.078 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:52.078 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:52.078 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.078 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:52.078 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:52.078 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.078 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.078 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.078 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.078 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:52.078 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.078 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.078 09:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.078 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.078 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.078 09:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.078 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.078 "name": "Existed_Raid", 00:11:52.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.078 "strip_size_kb": 64, 00:11:52.078 "state": "configuring", 00:11:52.078 "raid_level": "raid0", 00:11:52.078 "superblock": false, 00:11:52.078 "num_base_bdevs": 4, 00:11:52.078 "num_base_bdevs_discovered": 3, 00:11:52.078 "num_base_bdevs_operational": 4, 00:11:52.078 "base_bdevs_list": [ 00:11:52.078 { 00:11:52.078 "name": "BaseBdev1", 00:11:52.078 "uuid": "5008750b-878e-4d75-bd63-cd760d7318ed", 00:11:52.078 "is_configured": true, 00:11:52.078 "data_offset": 0, 00:11:52.078 "data_size": 65536 00:11:52.078 }, 00:11:52.078 { 00:11:52.078 "name": "BaseBdev2", 00:11:52.078 "uuid": "c5f5554b-5432-4a1d-a8e9-8807289a21c8", 00:11:52.078 "is_configured": true, 00:11:52.078 "data_offset": 0, 00:11:52.078 "data_size": 65536 00:11:52.078 }, 00:11:52.078 { 00:11:52.078 "name": "BaseBdev3", 00:11:52.078 "uuid": "d41df974-459f-4677-a1a5-8f834843f316", 00:11:52.078 "is_configured": true, 00:11:52.078 "data_offset": 0, 00:11:52.078 "data_size": 65536 00:11:52.078 }, 00:11:52.078 { 00:11:52.078 "name": "BaseBdev4", 00:11:52.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.078 "is_configured": false, 
00:11:52.078 "data_offset": 0, 00:11:52.078 "data_size": 0 00:11:52.078 } 00:11:52.078 ] 00:11:52.078 }' 00:11:52.078 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.078 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.347 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:52.347 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.347 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.347 [2024-11-27 09:48:53.461725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:52.347 [2024-11-27 09:48:53.461893] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:52.347 [2024-11-27 09:48:53.461921] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:52.347 [2024-11-27 09:48:53.462279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:52.347 [2024-11-27 09:48:53.462497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:52.347 [2024-11-27 09:48:53.462540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:52.347 [2024-11-27 09:48:53.462916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.347 BaseBdev4 00:11:52.347 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.347 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:52.347 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:52.347 09:48:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:52.347 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:52.347 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:52.347 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:52.347 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:52.347 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.347 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.607 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.607 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:52.607 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.607 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.607 [ 00:11:52.607 { 00:11:52.607 "name": "BaseBdev4", 00:11:52.607 "aliases": [ 00:11:52.607 "d387972f-9c89-4f5a-81a5-ee12ca77e19b" 00:11:52.607 ], 00:11:52.607 "product_name": "Malloc disk", 00:11:52.607 "block_size": 512, 00:11:52.607 "num_blocks": 65536, 00:11:52.607 "uuid": "d387972f-9c89-4f5a-81a5-ee12ca77e19b", 00:11:52.607 "assigned_rate_limits": { 00:11:52.607 "rw_ios_per_sec": 0, 00:11:52.607 "rw_mbytes_per_sec": 0, 00:11:52.607 "r_mbytes_per_sec": 0, 00:11:52.607 "w_mbytes_per_sec": 0 00:11:52.607 }, 00:11:52.607 "claimed": true, 00:11:52.607 "claim_type": "exclusive_write", 00:11:52.607 "zoned": false, 00:11:52.607 "supported_io_types": { 00:11:52.607 "read": true, 00:11:52.607 "write": true, 00:11:52.607 "unmap": true, 00:11:52.607 "flush": true, 00:11:52.607 "reset": true, 00:11:52.607 
"nvme_admin": false, 00:11:52.607 "nvme_io": false, 00:11:52.607 "nvme_io_md": false, 00:11:52.607 "write_zeroes": true, 00:11:52.607 "zcopy": true, 00:11:52.607 "get_zone_info": false, 00:11:52.607 "zone_management": false, 00:11:52.607 "zone_append": false, 00:11:52.607 "compare": false, 00:11:52.607 "compare_and_write": false, 00:11:52.607 "abort": true, 00:11:52.607 "seek_hole": false, 00:11:52.608 "seek_data": false, 00:11:52.608 "copy": true, 00:11:52.608 "nvme_iov_md": false 00:11:52.608 }, 00:11:52.608 "memory_domains": [ 00:11:52.608 { 00:11:52.608 "dma_device_id": "system", 00:11:52.608 "dma_device_type": 1 00:11:52.608 }, 00:11:52.608 { 00:11:52.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.608 "dma_device_type": 2 00:11:52.608 } 00:11:52.608 ], 00:11:52.608 "driver_specific": {} 00:11:52.608 } 00:11:52.608 ] 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.608 09:48:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.608 "name": "Existed_Raid", 00:11:52.608 "uuid": "bb0bf3d2-b373-4876-8398-fa35acbd1888", 00:11:52.608 "strip_size_kb": 64, 00:11:52.608 "state": "online", 00:11:52.608 "raid_level": "raid0", 00:11:52.608 "superblock": false, 00:11:52.608 "num_base_bdevs": 4, 00:11:52.608 "num_base_bdevs_discovered": 4, 00:11:52.608 "num_base_bdevs_operational": 4, 00:11:52.608 "base_bdevs_list": [ 00:11:52.608 { 00:11:52.608 "name": "BaseBdev1", 00:11:52.608 "uuid": "5008750b-878e-4d75-bd63-cd760d7318ed", 00:11:52.608 "is_configured": true, 00:11:52.608 "data_offset": 0, 00:11:52.608 "data_size": 65536 00:11:52.608 }, 00:11:52.608 { 00:11:52.608 "name": "BaseBdev2", 00:11:52.608 "uuid": "c5f5554b-5432-4a1d-a8e9-8807289a21c8", 00:11:52.608 "is_configured": true, 00:11:52.608 "data_offset": 0, 00:11:52.608 "data_size": 65536 00:11:52.608 }, 00:11:52.608 { 00:11:52.608 "name": "BaseBdev3", 00:11:52.608 "uuid": 
"d41df974-459f-4677-a1a5-8f834843f316", 00:11:52.608 "is_configured": true, 00:11:52.608 "data_offset": 0, 00:11:52.608 "data_size": 65536 00:11:52.608 }, 00:11:52.608 { 00:11:52.608 "name": "BaseBdev4", 00:11:52.608 "uuid": "d387972f-9c89-4f5a-81a5-ee12ca77e19b", 00:11:52.608 "is_configured": true, 00:11:52.608 "data_offset": 0, 00:11:52.608 "data_size": 65536 00:11:52.608 } 00:11:52.608 ] 00:11:52.608 }' 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.608 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.868 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:52.869 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:52.869 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:52.869 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:52.869 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:52.869 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:52.869 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:52.869 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:52.869 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.869 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.869 [2024-11-27 09:48:53.889480] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.869 09:48:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.869 09:48:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:52.869 "name": "Existed_Raid", 00:11:52.869 "aliases": [ 00:11:52.869 "bb0bf3d2-b373-4876-8398-fa35acbd1888" 00:11:52.869 ], 00:11:52.869 "product_name": "Raid Volume", 00:11:52.869 "block_size": 512, 00:11:52.869 "num_blocks": 262144, 00:11:52.869 "uuid": "bb0bf3d2-b373-4876-8398-fa35acbd1888", 00:11:52.869 "assigned_rate_limits": { 00:11:52.869 "rw_ios_per_sec": 0, 00:11:52.869 "rw_mbytes_per_sec": 0, 00:11:52.869 "r_mbytes_per_sec": 0, 00:11:52.869 "w_mbytes_per_sec": 0 00:11:52.869 }, 00:11:52.869 "claimed": false, 00:11:52.869 "zoned": false, 00:11:52.869 "supported_io_types": { 00:11:52.869 "read": true, 00:11:52.869 "write": true, 00:11:52.869 "unmap": true, 00:11:52.869 "flush": true, 00:11:52.869 "reset": true, 00:11:52.869 "nvme_admin": false, 00:11:52.869 "nvme_io": false, 00:11:52.869 "nvme_io_md": false, 00:11:52.869 "write_zeroes": true, 00:11:52.869 "zcopy": false, 00:11:52.869 "get_zone_info": false, 00:11:52.869 "zone_management": false, 00:11:52.869 "zone_append": false, 00:11:52.869 "compare": false, 00:11:52.869 "compare_and_write": false, 00:11:52.869 "abort": false, 00:11:52.869 "seek_hole": false, 00:11:52.869 "seek_data": false, 00:11:52.869 "copy": false, 00:11:52.869 "nvme_iov_md": false 00:11:52.869 }, 00:11:52.869 "memory_domains": [ 00:11:52.869 { 00:11:52.869 "dma_device_id": "system", 00:11:52.869 "dma_device_type": 1 00:11:52.869 }, 00:11:52.869 { 00:11:52.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.869 "dma_device_type": 2 00:11:52.869 }, 00:11:52.869 { 00:11:52.869 "dma_device_id": "system", 00:11:52.869 "dma_device_type": 1 00:11:52.869 }, 00:11:52.869 { 00:11:52.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.869 "dma_device_type": 2 00:11:52.869 }, 00:11:52.869 { 00:11:52.869 "dma_device_id": "system", 00:11:52.869 "dma_device_type": 1 00:11:52.869 }, 00:11:52.869 { 00:11:52.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:52.869 "dma_device_type": 2 00:11:52.869 }, 00:11:52.869 { 00:11:52.869 "dma_device_id": "system", 00:11:52.869 "dma_device_type": 1 00:11:52.869 }, 00:11:52.869 { 00:11:52.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:52.869 "dma_device_type": 2 00:11:52.869 } 00:11:52.869 ], 00:11:52.869 "driver_specific": { 00:11:52.869 "raid": { 00:11:52.869 "uuid": "bb0bf3d2-b373-4876-8398-fa35acbd1888", 00:11:52.869 "strip_size_kb": 64, 00:11:52.869 "state": "online", 00:11:52.869 "raid_level": "raid0", 00:11:52.869 "superblock": false, 00:11:52.869 "num_base_bdevs": 4, 00:11:52.869 "num_base_bdevs_discovered": 4, 00:11:52.869 "num_base_bdevs_operational": 4, 00:11:52.869 "base_bdevs_list": [ 00:11:52.869 { 00:11:52.869 "name": "BaseBdev1", 00:11:52.869 "uuid": "5008750b-878e-4d75-bd63-cd760d7318ed", 00:11:52.869 "is_configured": true, 00:11:52.869 "data_offset": 0, 00:11:52.869 "data_size": 65536 00:11:52.869 }, 00:11:52.869 { 00:11:52.869 "name": "BaseBdev2", 00:11:52.869 "uuid": "c5f5554b-5432-4a1d-a8e9-8807289a21c8", 00:11:52.869 "is_configured": true, 00:11:52.869 "data_offset": 0, 00:11:52.869 "data_size": 65536 00:11:52.869 }, 00:11:52.869 { 00:11:52.869 "name": "BaseBdev3", 00:11:52.869 "uuid": "d41df974-459f-4677-a1a5-8f834843f316", 00:11:52.869 "is_configured": true, 00:11:52.869 "data_offset": 0, 00:11:52.869 "data_size": 65536 00:11:52.869 }, 00:11:52.869 { 00:11:52.869 "name": "BaseBdev4", 00:11:52.869 "uuid": "d387972f-9c89-4f5a-81a5-ee12ca77e19b", 00:11:52.869 "is_configured": true, 00:11:52.869 "data_offset": 0, 00:11:52.869 "data_size": 65536 00:11:52.869 } 00:11:52.869 ] 00:11:52.869 } 00:11:52.869 } 00:11:52.869 }' 00:11:52.869 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:52.869 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:52.869 BaseBdev2 00:11:52.869 BaseBdev3 
00:11:52.869 BaseBdev4' 00:11:52.869 09:48:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.130 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.131 09:48:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.131 09:48:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.131 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.131 [2024-11-27 09:48:54.220642] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:53.131 [2024-11-27 09:48:54.220683] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.131 [2024-11-27 09:48:54.220748] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.392 "name": "Existed_Raid", 00:11:53.392 "uuid": "bb0bf3d2-b373-4876-8398-fa35acbd1888", 00:11:53.392 "strip_size_kb": 64, 00:11:53.392 "state": "offline", 00:11:53.392 "raid_level": "raid0", 00:11:53.392 "superblock": false, 00:11:53.392 "num_base_bdevs": 4, 00:11:53.392 "num_base_bdevs_discovered": 3, 00:11:53.392 "num_base_bdevs_operational": 3, 00:11:53.392 "base_bdevs_list": [ 00:11:53.392 { 00:11:53.392 "name": null, 00:11:53.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.392 "is_configured": false, 00:11:53.392 "data_offset": 0, 00:11:53.392 "data_size": 65536 00:11:53.392 }, 00:11:53.392 { 00:11:53.392 "name": "BaseBdev2", 00:11:53.392 "uuid": "c5f5554b-5432-4a1d-a8e9-8807289a21c8", 00:11:53.392 "is_configured": 
true, 00:11:53.392 "data_offset": 0, 00:11:53.392 "data_size": 65536 00:11:53.392 }, 00:11:53.392 { 00:11:53.392 "name": "BaseBdev3", 00:11:53.392 "uuid": "d41df974-459f-4677-a1a5-8f834843f316", 00:11:53.392 "is_configured": true, 00:11:53.392 "data_offset": 0, 00:11:53.392 "data_size": 65536 00:11:53.392 }, 00:11:53.392 { 00:11:53.392 "name": "BaseBdev4", 00:11:53.392 "uuid": "d387972f-9c89-4f5a-81a5-ee12ca77e19b", 00:11:53.392 "is_configured": true, 00:11:53.392 "data_offset": 0, 00:11:53.392 "data_size": 65536 00:11:53.392 } 00:11:53.392 ] 00:11:53.392 }' 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.392 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.963 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:53.963 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:53.963 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.963 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:53.963 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.963 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.963 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.963 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:53.963 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:53.963 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:53.963 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:53.963 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.963 [2024-11-27 09:48:54.843846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:53.963 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.963 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:53.963 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:53.963 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.963 09:48:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:53.963 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.963 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.963 09:48:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.963 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:53.963 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:53.963 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:53.963 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.963 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.963 [2024-11-27 09:48:55.014027] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:54.223 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.223 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:54.223 09:48:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:54.223 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.223 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.223 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.223 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:54.223 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.223 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:54.223 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:54.223 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:54.223 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.223 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.223 [2024-11-27 09:48:55.183449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:54.223 [2024-11-27 09:48:55.183518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:54.223 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.223 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:54.223 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:54.224 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.224 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:54.224 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:54.224 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.224 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.224 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:54.224 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:54.224 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:54.224 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:54.224 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:54.224 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:54.224 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.224 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.484 BaseBdev2 00:11:54.484 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.484 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:54.484 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:54.484 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:54.484 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:54.484 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:54.484 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:54.484 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:54.484 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.484 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.484 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.484 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.485 [ 00:11:54.485 { 00:11:54.485 "name": "BaseBdev2", 00:11:54.485 "aliases": [ 00:11:54.485 "1bc1360c-4940-4aff-a3ff-27e2e9bdb2b0" 00:11:54.485 ], 00:11:54.485 "product_name": "Malloc disk", 00:11:54.485 "block_size": 512, 00:11:54.485 "num_blocks": 65536, 00:11:54.485 "uuid": "1bc1360c-4940-4aff-a3ff-27e2e9bdb2b0", 00:11:54.485 "assigned_rate_limits": { 00:11:54.485 "rw_ios_per_sec": 0, 00:11:54.485 "rw_mbytes_per_sec": 0, 00:11:54.485 "r_mbytes_per_sec": 0, 00:11:54.485 "w_mbytes_per_sec": 0 00:11:54.485 }, 00:11:54.485 "claimed": false, 00:11:54.485 "zoned": false, 00:11:54.485 "supported_io_types": { 00:11:54.485 "read": true, 00:11:54.485 "write": true, 00:11:54.485 "unmap": true, 00:11:54.485 "flush": true, 00:11:54.485 "reset": true, 00:11:54.485 "nvme_admin": false, 00:11:54.485 "nvme_io": false, 00:11:54.485 "nvme_io_md": false, 00:11:54.485 "write_zeroes": true, 00:11:54.485 "zcopy": true, 00:11:54.485 "get_zone_info": false, 00:11:54.485 "zone_management": false, 00:11:54.485 "zone_append": false, 00:11:54.485 "compare": false, 00:11:54.485 "compare_and_write": false, 00:11:54.485 "abort": true, 00:11:54.485 "seek_hole": false, 00:11:54.485 
"seek_data": false, 00:11:54.485 "copy": true, 00:11:54.485 "nvme_iov_md": false 00:11:54.485 }, 00:11:54.485 "memory_domains": [ 00:11:54.485 { 00:11:54.485 "dma_device_id": "system", 00:11:54.485 "dma_device_type": 1 00:11:54.485 }, 00:11:54.485 { 00:11:54.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.485 "dma_device_type": 2 00:11:54.485 } 00:11:54.485 ], 00:11:54.485 "driver_specific": {} 00:11:54.485 } 00:11:54.485 ] 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.485 BaseBdev3 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.485 [ 00:11:54.485 { 00:11:54.485 "name": "BaseBdev3", 00:11:54.485 "aliases": [ 00:11:54.485 "7917971f-8794-4e16-82de-78b9f61382c0" 00:11:54.485 ], 00:11:54.485 "product_name": "Malloc disk", 00:11:54.485 "block_size": 512, 00:11:54.485 "num_blocks": 65536, 00:11:54.485 "uuid": "7917971f-8794-4e16-82de-78b9f61382c0", 00:11:54.485 "assigned_rate_limits": { 00:11:54.485 "rw_ios_per_sec": 0, 00:11:54.485 "rw_mbytes_per_sec": 0, 00:11:54.485 "r_mbytes_per_sec": 0, 00:11:54.485 "w_mbytes_per_sec": 0 00:11:54.485 }, 00:11:54.485 "claimed": false, 00:11:54.485 "zoned": false, 00:11:54.485 "supported_io_types": { 00:11:54.485 "read": true, 00:11:54.485 "write": true, 00:11:54.485 "unmap": true, 00:11:54.485 "flush": true, 00:11:54.485 "reset": true, 00:11:54.485 "nvme_admin": false, 00:11:54.485 "nvme_io": false, 00:11:54.485 "nvme_io_md": false, 00:11:54.485 "write_zeroes": true, 00:11:54.485 "zcopy": true, 00:11:54.485 "get_zone_info": false, 00:11:54.485 "zone_management": false, 00:11:54.485 "zone_append": false, 00:11:54.485 "compare": false, 00:11:54.485 "compare_and_write": false, 00:11:54.485 "abort": true, 00:11:54.485 "seek_hole": false, 00:11:54.485 "seek_data": false, 
00:11:54.485 "copy": true, 00:11:54.485 "nvme_iov_md": false 00:11:54.485 }, 00:11:54.485 "memory_domains": [ 00:11:54.485 { 00:11:54.485 "dma_device_id": "system", 00:11:54.485 "dma_device_type": 1 00:11:54.485 }, 00:11:54.485 { 00:11:54.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.485 "dma_device_type": 2 00:11:54.485 } 00:11:54.485 ], 00:11:54.485 "driver_specific": {} 00:11:54.485 } 00:11:54.485 ] 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.485 BaseBdev4 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:54.485 
09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.485 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.485 [ 00:11:54.485 { 00:11:54.486 "name": "BaseBdev4", 00:11:54.486 "aliases": [ 00:11:54.486 "7e2f72ae-dba6-43ba-a2b5-2819edd89480" 00:11:54.486 ], 00:11:54.486 "product_name": "Malloc disk", 00:11:54.486 "block_size": 512, 00:11:54.486 "num_blocks": 65536, 00:11:54.486 "uuid": "7e2f72ae-dba6-43ba-a2b5-2819edd89480", 00:11:54.486 "assigned_rate_limits": { 00:11:54.486 "rw_ios_per_sec": 0, 00:11:54.486 "rw_mbytes_per_sec": 0, 00:11:54.486 "r_mbytes_per_sec": 0, 00:11:54.486 "w_mbytes_per_sec": 0 00:11:54.486 }, 00:11:54.486 "claimed": false, 00:11:54.486 "zoned": false, 00:11:54.486 "supported_io_types": { 00:11:54.486 "read": true, 00:11:54.486 "write": true, 00:11:54.486 "unmap": true, 00:11:54.486 "flush": true, 00:11:54.486 "reset": true, 00:11:54.486 "nvme_admin": false, 00:11:54.486 "nvme_io": false, 00:11:54.486 "nvme_io_md": false, 00:11:54.486 "write_zeroes": true, 00:11:54.486 "zcopy": true, 00:11:54.486 "get_zone_info": false, 00:11:54.486 "zone_management": false, 00:11:54.486 "zone_append": false, 00:11:54.486 "compare": false, 00:11:54.486 "compare_and_write": false, 00:11:54.486 "abort": true, 00:11:54.486 "seek_hole": false, 00:11:54.486 "seek_data": false, 00:11:54.486 
"copy": true, 00:11:54.486 "nvme_iov_md": false 00:11:54.486 }, 00:11:54.486 "memory_domains": [ 00:11:54.486 { 00:11:54.486 "dma_device_id": "system", 00:11:54.486 "dma_device_type": 1 00:11:54.486 }, 00:11:54.486 { 00:11:54.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.486 "dma_device_type": 2 00:11:54.486 } 00:11:54.486 ], 00:11:54.486 "driver_specific": {} 00:11:54.486 } 00:11:54.486 ] 00:11:54.486 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.486 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:54.486 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:54.486 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:54.486 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:54.486 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.486 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.746 [2024-11-27 09:48:55.618646] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:54.746 [2024-11-27 09:48:55.618764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:54.746 [2024-11-27 09:48:55.618837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:54.746 [2024-11-27 09:48:55.621172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:54.746 [2024-11-27 09:48:55.621276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:54.746 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.746 09:48:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:54.746 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.746 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.746 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:54.746 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.746 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.746 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.746 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.746 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.746 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.746 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.746 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.746 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.746 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.746 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.746 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.746 "name": "Existed_Raid", 00:11:54.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.747 "strip_size_kb": 64, 00:11:54.747 "state": "configuring", 00:11:54.747 
"raid_level": "raid0", 00:11:54.747 "superblock": false, 00:11:54.747 "num_base_bdevs": 4, 00:11:54.747 "num_base_bdevs_discovered": 3, 00:11:54.747 "num_base_bdevs_operational": 4, 00:11:54.747 "base_bdevs_list": [ 00:11:54.747 { 00:11:54.747 "name": "BaseBdev1", 00:11:54.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.747 "is_configured": false, 00:11:54.747 "data_offset": 0, 00:11:54.747 "data_size": 0 00:11:54.747 }, 00:11:54.747 { 00:11:54.747 "name": "BaseBdev2", 00:11:54.747 "uuid": "1bc1360c-4940-4aff-a3ff-27e2e9bdb2b0", 00:11:54.747 "is_configured": true, 00:11:54.747 "data_offset": 0, 00:11:54.747 "data_size": 65536 00:11:54.747 }, 00:11:54.747 { 00:11:54.747 "name": "BaseBdev3", 00:11:54.747 "uuid": "7917971f-8794-4e16-82de-78b9f61382c0", 00:11:54.747 "is_configured": true, 00:11:54.747 "data_offset": 0, 00:11:54.747 "data_size": 65536 00:11:54.747 }, 00:11:54.747 { 00:11:54.747 "name": "BaseBdev4", 00:11:54.747 "uuid": "7e2f72ae-dba6-43ba-a2b5-2819edd89480", 00:11:54.747 "is_configured": true, 00:11:54.747 "data_offset": 0, 00:11:54.747 "data_size": 65536 00:11:54.747 } 00:11:54.747 ] 00:11:54.747 }' 00:11:54.747 09:48:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.747 09:48:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.007 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:55.007 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.007 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.007 [2024-11-27 09:48:56.093869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:55.007 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.007 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:55.007 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.007 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.007 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:55.007 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.007 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.007 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.007 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.007 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.007 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.007 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.007 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.007 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.007 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.007 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.267 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.267 "name": "Existed_Raid", 00:11:55.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.267 "strip_size_kb": 64, 00:11:55.267 "state": "configuring", 00:11:55.267 "raid_level": "raid0", 00:11:55.267 "superblock": false, 00:11:55.267 
"num_base_bdevs": 4, 00:11:55.267 "num_base_bdevs_discovered": 2, 00:11:55.267 "num_base_bdevs_operational": 4, 00:11:55.267 "base_bdevs_list": [ 00:11:55.267 { 00:11:55.267 "name": "BaseBdev1", 00:11:55.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.267 "is_configured": false, 00:11:55.267 "data_offset": 0, 00:11:55.267 "data_size": 0 00:11:55.267 }, 00:11:55.267 { 00:11:55.267 "name": null, 00:11:55.267 "uuid": "1bc1360c-4940-4aff-a3ff-27e2e9bdb2b0", 00:11:55.267 "is_configured": false, 00:11:55.267 "data_offset": 0, 00:11:55.267 "data_size": 65536 00:11:55.267 }, 00:11:55.267 { 00:11:55.267 "name": "BaseBdev3", 00:11:55.267 "uuid": "7917971f-8794-4e16-82de-78b9f61382c0", 00:11:55.267 "is_configured": true, 00:11:55.267 "data_offset": 0, 00:11:55.267 "data_size": 65536 00:11:55.267 }, 00:11:55.267 { 00:11:55.267 "name": "BaseBdev4", 00:11:55.267 "uuid": "7e2f72ae-dba6-43ba-a2b5-2819edd89480", 00:11:55.267 "is_configured": true, 00:11:55.267 "data_offset": 0, 00:11:55.267 "data_size": 65536 00:11:55.267 } 00:11:55.267 ] 00:11:55.267 }' 00:11:55.267 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.267 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.527 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:55.527 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.527 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.527 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.527 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.527 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:55.527 09:48:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:55.527 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.527 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.527 [2024-11-27 09:48:56.646133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.527 BaseBdev1 00:11:55.527 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.527 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:55.527 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:55.527 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.527 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:55.527 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.527 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:55.528 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.528 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.528 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:55.788 [ 00:11:55.788 { 00:11:55.788 "name": "BaseBdev1", 00:11:55.788 "aliases": [ 00:11:55.788 "4810a96d-f17e-4489-b3a9-bc3460d17840" 00:11:55.788 ], 00:11:55.788 "product_name": "Malloc disk", 00:11:55.788 "block_size": 512, 00:11:55.788 "num_blocks": 65536, 00:11:55.788 "uuid": "4810a96d-f17e-4489-b3a9-bc3460d17840", 00:11:55.788 "assigned_rate_limits": { 00:11:55.788 "rw_ios_per_sec": 0, 00:11:55.788 "rw_mbytes_per_sec": 0, 00:11:55.788 "r_mbytes_per_sec": 0, 00:11:55.788 "w_mbytes_per_sec": 0 00:11:55.788 }, 00:11:55.788 "claimed": true, 00:11:55.788 "claim_type": "exclusive_write", 00:11:55.788 "zoned": false, 00:11:55.788 "supported_io_types": { 00:11:55.788 "read": true, 00:11:55.788 "write": true, 00:11:55.788 "unmap": true, 00:11:55.788 "flush": true, 00:11:55.788 "reset": true, 00:11:55.788 "nvme_admin": false, 00:11:55.788 "nvme_io": false, 00:11:55.788 "nvme_io_md": false, 00:11:55.788 "write_zeroes": true, 00:11:55.788 "zcopy": true, 00:11:55.788 "get_zone_info": false, 00:11:55.788 "zone_management": false, 00:11:55.788 "zone_append": false, 00:11:55.788 "compare": false, 00:11:55.788 "compare_and_write": false, 00:11:55.788 "abort": true, 00:11:55.788 "seek_hole": false, 00:11:55.788 "seek_data": false, 00:11:55.788 "copy": true, 00:11:55.788 "nvme_iov_md": false 00:11:55.788 }, 00:11:55.788 "memory_domains": [ 00:11:55.788 { 00:11:55.788 "dma_device_id": "system", 00:11:55.788 "dma_device_type": 1 00:11:55.788 }, 00:11:55.788 { 00:11:55.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.788 "dma_device_type": 2 00:11:55.788 } 00:11:55.788 ], 00:11:55.788 "driver_specific": {} 00:11:55.788 } 00:11:55.788 ] 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.788 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.788 "name": "Existed_Raid", 00:11:55.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.788 "strip_size_kb": 64, 00:11:55.788 "state": "configuring", 00:11:55.788 "raid_level": "raid0", 00:11:55.788 "superblock": false, 
00:11:55.788 "num_base_bdevs": 4, 00:11:55.788 "num_base_bdevs_discovered": 3, 00:11:55.788 "num_base_bdevs_operational": 4, 00:11:55.788 "base_bdevs_list": [ 00:11:55.788 { 00:11:55.788 "name": "BaseBdev1", 00:11:55.788 "uuid": "4810a96d-f17e-4489-b3a9-bc3460d17840", 00:11:55.788 "is_configured": true, 00:11:55.788 "data_offset": 0, 00:11:55.788 "data_size": 65536 00:11:55.788 }, 00:11:55.788 { 00:11:55.788 "name": null, 00:11:55.788 "uuid": "1bc1360c-4940-4aff-a3ff-27e2e9bdb2b0", 00:11:55.788 "is_configured": false, 00:11:55.788 "data_offset": 0, 00:11:55.788 "data_size": 65536 00:11:55.788 }, 00:11:55.788 { 00:11:55.788 "name": "BaseBdev3", 00:11:55.788 "uuid": "7917971f-8794-4e16-82de-78b9f61382c0", 00:11:55.788 "is_configured": true, 00:11:55.788 "data_offset": 0, 00:11:55.789 "data_size": 65536 00:11:55.789 }, 00:11:55.789 { 00:11:55.789 "name": "BaseBdev4", 00:11:55.789 "uuid": "7e2f72ae-dba6-43ba-a2b5-2819edd89480", 00:11:55.789 "is_configured": true, 00:11:55.789 "data_offset": 0, 00:11:55.789 "data_size": 65536 00:11:55.789 } 00:11:55.789 ] 00:11:55.789 }' 00:11:55.789 09:48:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.789 09:48:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:56.050 09:48:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.050 [2024-11-27 09:48:57.157416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:56.050 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.310 09:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.310 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.310 "name": "Existed_Raid", 00:11:56.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.310 "strip_size_kb": 64, 00:11:56.310 "state": "configuring", 00:11:56.310 "raid_level": "raid0", 00:11:56.310 "superblock": false, 00:11:56.310 "num_base_bdevs": 4, 00:11:56.310 "num_base_bdevs_discovered": 2, 00:11:56.310 "num_base_bdevs_operational": 4, 00:11:56.310 "base_bdevs_list": [ 00:11:56.310 { 00:11:56.310 "name": "BaseBdev1", 00:11:56.310 "uuid": "4810a96d-f17e-4489-b3a9-bc3460d17840", 00:11:56.310 "is_configured": true, 00:11:56.310 "data_offset": 0, 00:11:56.310 "data_size": 65536 00:11:56.310 }, 00:11:56.310 { 00:11:56.310 "name": null, 00:11:56.310 "uuid": "1bc1360c-4940-4aff-a3ff-27e2e9bdb2b0", 00:11:56.310 "is_configured": false, 00:11:56.310 "data_offset": 0, 00:11:56.310 "data_size": 65536 00:11:56.310 }, 00:11:56.310 { 00:11:56.310 "name": null, 00:11:56.310 "uuid": "7917971f-8794-4e16-82de-78b9f61382c0", 00:11:56.310 "is_configured": false, 00:11:56.310 "data_offset": 0, 00:11:56.310 "data_size": 65536 00:11:56.310 }, 00:11:56.310 { 00:11:56.310 "name": "BaseBdev4", 00:11:56.310 "uuid": "7e2f72ae-dba6-43ba-a2b5-2819edd89480", 00:11:56.310 "is_configured": true, 00:11:56.310 "data_offset": 0, 00:11:56.310 "data_size": 65536 00:11:56.310 } 00:11:56.310 ] 00:11:56.310 }' 00:11:56.310 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.310 09:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.572 [2024-11-27 09:48:57.632550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.572 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.572 "name": "Existed_Raid", 00:11:56.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.573 "strip_size_kb": 64, 00:11:56.573 "state": "configuring", 00:11:56.573 "raid_level": "raid0", 00:11:56.573 "superblock": false, 00:11:56.573 "num_base_bdevs": 4, 00:11:56.573 "num_base_bdevs_discovered": 3, 00:11:56.573 "num_base_bdevs_operational": 4, 00:11:56.573 "base_bdevs_list": [ 00:11:56.573 { 00:11:56.573 "name": "BaseBdev1", 00:11:56.573 "uuid": "4810a96d-f17e-4489-b3a9-bc3460d17840", 00:11:56.573 "is_configured": true, 00:11:56.573 "data_offset": 0, 00:11:56.573 "data_size": 65536 00:11:56.573 }, 00:11:56.573 { 00:11:56.573 "name": null, 00:11:56.573 "uuid": "1bc1360c-4940-4aff-a3ff-27e2e9bdb2b0", 00:11:56.573 "is_configured": false, 00:11:56.573 "data_offset": 0, 00:11:56.573 "data_size": 65536 00:11:56.573 }, 00:11:56.573 { 00:11:56.573 "name": "BaseBdev3", 00:11:56.573 "uuid": "7917971f-8794-4e16-82de-78b9f61382c0", 00:11:56.573 "is_configured": 
true, 00:11:56.573 "data_offset": 0, 00:11:56.573 "data_size": 65536 00:11:56.573 }, 00:11:56.573 { 00:11:56.573 "name": "BaseBdev4", 00:11:56.573 "uuid": "7e2f72ae-dba6-43ba-a2b5-2819edd89480", 00:11:56.573 "is_configured": true, 00:11:56.573 "data_offset": 0, 00:11:56.573 "data_size": 65536 00:11:56.573 } 00:11:56.573 ] 00:11:56.573 }' 00:11:56.573 09:48:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.573 09:48:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.143 [2024-11-27 09:48:58.135749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.143 09:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.403 09:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.403 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.403 "name": "Existed_Raid", 00:11:57.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.403 "strip_size_kb": 64, 00:11:57.403 "state": "configuring", 00:11:57.403 "raid_level": "raid0", 00:11:57.403 "superblock": false, 00:11:57.403 "num_base_bdevs": 4, 00:11:57.403 "num_base_bdevs_discovered": 2, 00:11:57.403 "num_base_bdevs_operational": 4, 00:11:57.403 
"base_bdevs_list": [ 00:11:57.403 { 00:11:57.403 "name": null, 00:11:57.403 "uuid": "4810a96d-f17e-4489-b3a9-bc3460d17840", 00:11:57.403 "is_configured": false, 00:11:57.403 "data_offset": 0, 00:11:57.403 "data_size": 65536 00:11:57.403 }, 00:11:57.403 { 00:11:57.403 "name": null, 00:11:57.403 "uuid": "1bc1360c-4940-4aff-a3ff-27e2e9bdb2b0", 00:11:57.403 "is_configured": false, 00:11:57.403 "data_offset": 0, 00:11:57.403 "data_size": 65536 00:11:57.403 }, 00:11:57.403 { 00:11:57.403 "name": "BaseBdev3", 00:11:57.403 "uuid": "7917971f-8794-4e16-82de-78b9f61382c0", 00:11:57.403 "is_configured": true, 00:11:57.403 "data_offset": 0, 00:11:57.403 "data_size": 65536 00:11:57.403 }, 00:11:57.403 { 00:11:57.403 "name": "BaseBdev4", 00:11:57.403 "uuid": "7e2f72ae-dba6-43ba-a2b5-2819edd89480", 00:11:57.403 "is_configured": true, 00:11:57.403 "data_offset": 0, 00:11:57.403 "data_size": 65536 00:11:57.403 } 00:11:57.403 ] 00:11:57.403 }' 00:11:57.403 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.403 09:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:57.664 09:48:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.664 [2024-11-27 09:48:58.713212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.664 "name": "Existed_Raid", 00:11:57.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.664 "strip_size_kb": 64, 00:11:57.664 "state": "configuring", 00:11:57.664 "raid_level": "raid0", 00:11:57.664 "superblock": false, 00:11:57.664 "num_base_bdevs": 4, 00:11:57.664 "num_base_bdevs_discovered": 3, 00:11:57.664 "num_base_bdevs_operational": 4, 00:11:57.664 "base_bdevs_list": [ 00:11:57.664 { 00:11:57.664 "name": null, 00:11:57.664 "uuid": "4810a96d-f17e-4489-b3a9-bc3460d17840", 00:11:57.664 "is_configured": false, 00:11:57.664 "data_offset": 0, 00:11:57.664 "data_size": 65536 00:11:57.664 }, 00:11:57.664 { 00:11:57.664 "name": "BaseBdev2", 00:11:57.664 "uuid": "1bc1360c-4940-4aff-a3ff-27e2e9bdb2b0", 00:11:57.664 "is_configured": true, 00:11:57.664 "data_offset": 0, 00:11:57.664 "data_size": 65536 00:11:57.664 }, 00:11:57.664 { 00:11:57.664 "name": "BaseBdev3", 00:11:57.664 "uuid": "7917971f-8794-4e16-82de-78b9f61382c0", 00:11:57.664 "is_configured": true, 00:11:57.664 "data_offset": 0, 00:11:57.664 "data_size": 65536 00:11:57.664 }, 00:11:57.664 { 00:11:57.664 "name": "BaseBdev4", 00:11:57.664 "uuid": "7e2f72ae-dba6-43ba-a2b5-2819edd89480", 00:11:57.664 "is_configured": true, 00:11:57.664 "data_offset": 0, 00:11:57.664 "data_size": 65536 00:11:57.664 } 00:11:57.664 ] 00:11:57.664 }' 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.664 09:48:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4810a96d-f17e-4489-b3a9-bc3460d17840 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.235 [2024-11-27 09:48:59.273055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:58.235 [2024-11-27 09:48:59.273115] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:58.235 [2024-11-27 09:48:59.273123] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:58.235 [2024-11-27 09:48:59.273435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:58.235 [2024-11-27 09:48:59.273602] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:58.235 [2024-11-27 09:48:59.273616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:58.235 [2024-11-27 09:48:59.273940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.235 NewBaseBdev 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.235 [ 00:11:58.235 { 
00:11:58.235 "name": "NewBaseBdev", 00:11:58.235 "aliases": [ 00:11:58.235 "4810a96d-f17e-4489-b3a9-bc3460d17840" 00:11:58.235 ], 00:11:58.235 "product_name": "Malloc disk", 00:11:58.235 "block_size": 512, 00:11:58.235 "num_blocks": 65536, 00:11:58.235 "uuid": "4810a96d-f17e-4489-b3a9-bc3460d17840", 00:11:58.235 "assigned_rate_limits": { 00:11:58.235 "rw_ios_per_sec": 0, 00:11:58.235 "rw_mbytes_per_sec": 0, 00:11:58.235 "r_mbytes_per_sec": 0, 00:11:58.235 "w_mbytes_per_sec": 0 00:11:58.235 }, 00:11:58.235 "claimed": true, 00:11:58.235 "claim_type": "exclusive_write", 00:11:58.235 "zoned": false, 00:11:58.235 "supported_io_types": { 00:11:58.235 "read": true, 00:11:58.235 "write": true, 00:11:58.235 "unmap": true, 00:11:58.235 "flush": true, 00:11:58.235 "reset": true, 00:11:58.235 "nvme_admin": false, 00:11:58.235 "nvme_io": false, 00:11:58.235 "nvme_io_md": false, 00:11:58.235 "write_zeroes": true, 00:11:58.235 "zcopy": true, 00:11:58.235 "get_zone_info": false, 00:11:58.235 "zone_management": false, 00:11:58.235 "zone_append": false, 00:11:58.235 "compare": false, 00:11:58.235 "compare_and_write": false, 00:11:58.235 "abort": true, 00:11:58.235 "seek_hole": false, 00:11:58.235 "seek_data": false, 00:11:58.235 "copy": true, 00:11:58.235 "nvme_iov_md": false 00:11:58.235 }, 00:11:58.235 "memory_domains": [ 00:11:58.235 { 00:11:58.235 "dma_device_id": "system", 00:11:58.235 "dma_device_type": 1 00:11:58.235 }, 00:11:58.235 { 00:11:58.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.235 "dma_device_type": 2 00:11:58.235 } 00:11:58.235 ], 00:11:58.235 "driver_specific": {} 00:11:58.235 } 00:11:58.235 ] 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:58.235 
09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.235 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.495 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.495 "name": "Existed_Raid", 00:11:58.495 "uuid": "d6f82ef6-7136-44f3-a665-755443033d82", 00:11:58.495 "strip_size_kb": 64, 00:11:58.495 "state": "online", 00:11:58.495 "raid_level": "raid0", 00:11:58.495 "superblock": false, 00:11:58.495 "num_base_bdevs": 4, 00:11:58.495 "num_base_bdevs_discovered": 4, 00:11:58.495 
"num_base_bdevs_operational": 4, 00:11:58.495 "base_bdevs_list": [ 00:11:58.495 { 00:11:58.495 "name": "NewBaseBdev", 00:11:58.495 "uuid": "4810a96d-f17e-4489-b3a9-bc3460d17840", 00:11:58.495 "is_configured": true, 00:11:58.495 "data_offset": 0, 00:11:58.495 "data_size": 65536 00:11:58.495 }, 00:11:58.495 { 00:11:58.495 "name": "BaseBdev2", 00:11:58.495 "uuid": "1bc1360c-4940-4aff-a3ff-27e2e9bdb2b0", 00:11:58.495 "is_configured": true, 00:11:58.495 "data_offset": 0, 00:11:58.495 "data_size": 65536 00:11:58.495 }, 00:11:58.495 { 00:11:58.495 "name": "BaseBdev3", 00:11:58.495 "uuid": "7917971f-8794-4e16-82de-78b9f61382c0", 00:11:58.495 "is_configured": true, 00:11:58.495 "data_offset": 0, 00:11:58.495 "data_size": 65536 00:11:58.495 }, 00:11:58.495 { 00:11:58.495 "name": "BaseBdev4", 00:11:58.495 "uuid": "7e2f72ae-dba6-43ba-a2b5-2819edd89480", 00:11:58.495 "is_configured": true, 00:11:58.495 "data_offset": 0, 00:11:58.495 "data_size": 65536 00:11:58.495 } 00:11:58.495 ] 00:11:58.495 }' 00:11:58.495 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.495 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.755 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:58.755 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:58.755 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:58.755 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:58.755 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:58.755 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:58.755 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:58.755 
09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:58.755 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.755 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.755 [2024-11-27 09:48:59.808590] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:58.755 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.755 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:58.755 "name": "Existed_Raid", 00:11:58.755 "aliases": [ 00:11:58.755 "d6f82ef6-7136-44f3-a665-755443033d82" 00:11:58.755 ], 00:11:58.755 "product_name": "Raid Volume", 00:11:58.755 "block_size": 512, 00:11:58.755 "num_blocks": 262144, 00:11:58.755 "uuid": "d6f82ef6-7136-44f3-a665-755443033d82", 00:11:58.755 "assigned_rate_limits": { 00:11:58.755 "rw_ios_per_sec": 0, 00:11:58.755 "rw_mbytes_per_sec": 0, 00:11:58.755 "r_mbytes_per_sec": 0, 00:11:58.755 "w_mbytes_per_sec": 0 00:11:58.755 }, 00:11:58.755 "claimed": false, 00:11:58.755 "zoned": false, 00:11:58.755 "supported_io_types": { 00:11:58.755 "read": true, 00:11:58.755 "write": true, 00:11:58.755 "unmap": true, 00:11:58.755 "flush": true, 00:11:58.755 "reset": true, 00:11:58.755 "nvme_admin": false, 00:11:58.755 "nvme_io": false, 00:11:58.755 "nvme_io_md": false, 00:11:58.755 "write_zeroes": true, 00:11:58.755 "zcopy": false, 00:11:58.755 "get_zone_info": false, 00:11:58.755 "zone_management": false, 00:11:58.755 "zone_append": false, 00:11:58.755 "compare": false, 00:11:58.755 "compare_and_write": false, 00:11:58.755 "abort": false, 00:11:58.755 "seek_hole": false, 00:11:58.755 "seek_data": false, 00:11:58.755 "copy": false, 00:11:58.755 "nvme_iov_md": false 00:11:58.755 }, 00:11:58.755 "memory_domains": [ 00:11:58.755 { 00:11:58.755 "dma_device_id": 
"system", 00:11:58.756 "dma_device_type": 1 00:11:58.756 }, 00:11:58.756 { 00:11:58.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.756 "dma_device_type": 2 00:11:58.756 }, 00:11:58.756 { 00:11:58.756 "dma_device_id": "system", 00:11:58.756 "dma_device_type": 1 00:11:58.756 }, 00:11:58.756 { 00:11:58.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.756 "dma_device_type": 2 00:11:58.756 }, 00:11:58.756 { 00:11:58.756 "dma_device_id": "system", 00:11:58.756 "dma_device_type": 1 00:11:58.756 }, 00:11:58.756 { 00:11:58.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.756 "dma_device_type": 2 00:11:58.756 }, 00:11:58.756 { 00:11:58.756 "dma_device_id": "system", 00:11:58.756 "dma_device_type": 1 00:11:58.756 }, 00:11:58.756 { 00:11:58.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.756 "dma_device_type": 2 00:11:58.756 } 00:11:58.756 ], 00:11:58.756 "driver_specific": { 00:11:58.756 "raid": { 00:11:58.756 "uuid": "d6f82ef6-7136-44f3-a665-755443033d82", 00:11:58.756 "strip_size_kb": 64, 00:11:58.756 "state": "online", 00:11:58.756 "raid_level": "raid0", 00:11:58.756 "superblock": false, 00:11:58.756 "num_base_bdevs": 4, 00:11:58.756 "num_base_bdevs_discovered": 4, 00:11:58.756 "num_base_bdevs_operational": 4, 00:11:58.756 "base_bdevs_list": [ 00:11:58.756 { 00:11:58.756 "name": "NewBaseBdev", 00:11:58.756 "uuid": "4810a96d-f17e-4489-b3a9-bc3460d17840", 00:11:58.756 "is_configured": true, 00:11:58.756 "data_offset": 0, 00:11:58.756 "data_size": 65536 00:11:58.756 }, 00:11:58.756 { 00:11:58.756 "name": "BaseBdev2", 00:11:58.756 "uuid": "1bc1360c-4940-4aff-a3ff-27e2e9bdb2b0", 00:11:58.756 "is_configured": true, 00:11:58.756 "data_offset": 0, 00:11:58.756 "data_size": 65536 00:11:58.756 }, 00:11:58.756 { 00:11:58.756 "name": "BaseBdev3", 00:11:58.756 "uuid": "7917971f-8794-4e16-82de-78b9f61382c0", 00:11:58.756 "is_configured": true, 00:11:58.756 "data_offset": 0, 00:11:58.756 "data_size": 65536 00:11:58.756 }, 00:11:58.756 { 00:11:58.756 "name": 
"BaseBdev4", 00:11:58.756 "uuid": "7e2f72ae-dba6-43ba-a2b5-2819edd89480", 00:11:58.756 "is_configured": true, 00:11:58.756 "data_offset": 0, 00:11:58.756 "data_size": 65536 00:11:58.756 } 00:11:58.756 ] 00:11:58.756 } 00:11:58.756 } 00:11:58.756 }' 00:11:58.756 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:58.756 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:58.756 BaseBdev2 00:11:58.756 BaseBdev3 00:11:58.756 BaseBdev4' 00:11:59.016 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.016 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:59.016 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.016 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:59.016 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.016 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.016 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.016 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.016 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.016 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.016 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.016 09:48:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:59.016 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.016 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.016 09:48:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.016 09:48:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.016 09:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.016 09:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.016 09:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.016 09:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:59.016 09:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.016 09:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.016 09:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.016 09:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.016 09:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.016 09:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.016 09:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.016 09:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:59.016 09:49:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.016 09:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.016 09:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.016 09:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.016 09:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.016 09:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.016 09:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:59.016 09:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.016 09:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.016 [2024-11-27 09:49:00.143624] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:59.016 [2024-11-27 09:49:00.143659] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:59.016 [2024-11-27 09:49:00.143759] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:59.016 [2024-11-27 09:49:00.143837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:59.016 [2024-11-27 09:49:00.143848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:59.276 09:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.276 09:49:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69635 00:11:59.276 09:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 69635 ']' 00:11:59.276 09:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69635 00:11:59.276 09:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:59.276 09:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.276 09:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69635 00:11:59.276 09:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.276 09:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.276 09:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69635' 00:11:59.276 killing process with pid 69635 00:11:59.276 09:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69635 00:11:59.276 [2024-11-27 09:49:00.182935] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:59.277 09:49:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69635 00:11:59.536 [2024-11-27 09:49:00.622627] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:00.919 00:12:00.919 real 0m11.930s 00:12:00.919 user 0m18.569s 00:12:00.919 sys 0m2.297s 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.919 ************************************ 00:12:00.919 END TEST raid_state_function_test 00:12:00.919 ************************************ 00:12:00.919 09:49:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:12:00.919 09:49:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:00.919 09:49:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.919 09:49:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:00.919 ************************************ 00:12:00.919 START TEST raid_state_function_test_sb 00:12:00.919 ************************************ 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:00.919 09:49:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70312 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70312' 00:12:00.919 Process raid pid: 70312 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70312 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70312 ']' 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.919 09:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.178 [2024-11-27 09:49:02.051683] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:12:01.179 [2024-11-27 09:49:02.051825] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.179 [2024-11-27 09:49:02.234529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.438 [2024-11-27 09:49:02.373297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.699 [2024-11-27 09:49:02.623001] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.699 [2024-11-27 09:49:02.623141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.959 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.959 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:01.959 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:01.959 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.959 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.959 [2024-11-27 09:49:02.959076] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:01.959 [2024-11-27 09:49:02.959142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:01.959 [2024-11-27 09:49:02.959154] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.959 [2024-11-27 09:49:02.959166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:01.959 [2024-11-27 09:49:02.959173] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:12:01.959 [2024-11-27 09:49:02.959185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:01.959 [2024-11-27 09:49:02.959193] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:01.959 [2024-11-27 09:49:02.959204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:01.959 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.959 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:01.959 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.959 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.959 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:01.959 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.959 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.959 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.959 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.959 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.959 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.959 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.959 09:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.959 09:49:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.959 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.959 09:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.959 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.959 "name": "Existed_Raid", 00:12:01.959 "uuid": "d4b7f0a5-fd26-4c4c-aa8f-bd92f595e1e4", 00:12:01.959 "strip_size_kb": 64, 00:12:01.959 "state": "configuring", 00:12:01.959 "raid_level": "raid0", 00:12:01.959 "superblock": true, 00:12:01.959 "num_base_bdevs": 4, 00:12:01.959 "num_base_bdevs_discovered": 0, 00:12:01.959 "num_base_bdevs_operational": 4, 00:12:01.959 "base_bdevs_list": [ 00:12:01.959 { 00:12:01.959 "name": "BaseBdev1", 00:12:01.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.959 "is_configured": false, 00:12:01.959 "data_offset": 0, 00:12:01.959 "data_size": 0 00:12:01.959 }, 00:12:01.959 { 00:12:01.959 "name": "BaseBdev2", 00:12:01.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.959 "is_configured": false, 00:12:01.959 "data_offset": 0, 00:12:01.959 "data_size": 0 00:12:01.959 }, 00:12:01.959 { 00:12:01.959 "name": "BaseBdev3", 00:12:01.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.959 "is_configured": false, 00:12:01.959 "data_offset": 0, 00:12:01.959 "data_size": 0 00:12:01.959 }, 00:12:01.959 { 00:12:01.959 "name": "BaseBdev4", 00:12:01.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.959 "is_configured": false, 00:12:01.959 "data_offset": 0, 00:12:01.959 "data_size": 0 00:12:01.959 } 00:12:01.959 ] 00:12:01.959 }' 00:12:01.959 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.959 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.529 09:49:03 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:02.529 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.529 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.529 [2024-11-27 09:49:03.418216] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:02.529 [2024-11-27 09:49:03.418324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:02.529 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.529 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:02.529 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.529 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.529 [2024-11-27 09:49:03.430208] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:02.529 [2024-11-27 09:49:03.430308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:02.529 [2024-11-27 09:49:03.430347] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:02.529 [2024-11-27 09:49:03.430373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:02.529 [2024-11-27 09:49:03.430404] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:02.529 [2024-11-27 09:49:03.430428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:02.529 [2024-11-27 09:49:03.430452] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:12:02.529 [2024-11-27 09:49:03.430475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:02.529 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.529 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:02.529 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.529 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.529 [2024-11-27 09:49:03.487909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.529 BaseBdev1 00:12:02.529 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.529 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:02.529 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:02.529 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:02.529 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:02.529 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.530 [ 00:12:02.530 { 00:12:02.530 "name": "BaseBdev1", 00:12:02.530 "aliases": [ 00:12:02.530 "20c00520-da7c-46fc-bd2d-732c7e1d7666" 00:12:02.530 ], 00:12:02.530 "product_name": "Malloc disk", 00:12:02.530 "block_size": 512, 00:12:02.530 "num_blocks": 65536, 00:12:02.530 "uuid": "20c00520-da7c-46fc-bd2d-732c7e1d7666", 00:12:02.530 "assigned_rate_limits": { 00:12:02.530 "rw_ios_per_sec": 0, 00:12:02.530 "rw_mbytes_per_sec": 0, 00:12:02.530 "r_mbytes_per_sec": 0, 00:12:02.530 "w_mbytes_per_sec": 0 00:12:02.530 }, 00:12:02.530 "claimed": true, 00:12:02.530 "claim_type": "exclusive_write", 00:12:02.530 "zoned": false, 00:12:02.530 "supported_io_types": { 00:12:02.530 "read": true, 00:12:02.530 "write": true, 00:12:02.530 "unmap": true, 00:12:02.530 "flush": true, 00:12:02.530 "reset": true, 00:12:02.530 "nvme_admin": false, 00:12:02.530 "nvme_io": false, 00:12:02.530 "nvme_io_md": false, 00:12:02.530 "write_zeroes": true, 00:12:02.530 "zcopy": true, 00:12:02.530 "get_zone_info": false, 00:12:02.530 "zone_management": false, 00:12:02.530 "zone_append": false, 00:12:02.530 "compare": false, 00:12:02.530 "compare_and_write": false, 00:12:02.530 "abort": true, 00:12:02.530 "seek_hole": false, 00:12:02.530 "seek_data": false, 00:12:02.530 "copy": true, 00:12:02.530 "nvme_iov_md": false 00:12:02.530 }, 00:12:02.530 "memory_domains": [ 00:12:02.530 { 00:12:02.530 "dma_device_id": "system", 00:12:02.530 "dma_device_type": 1 00:12:02.530 }, 00:12:02.530 { 00:12:02.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.530 "dma_device_type": 2 00:12:02.530 } 00:12:02.530 ], 00:12:02.530 "driver_specific": {} 
00:12:02.530 } 00:12:02.530 ] 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.530 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.530 "name": "Existed_Raid", 00:12:02.530 "uuid": "a24371f8-be62-48e9-81ff-415c6f9efd7b", 00:12:02.530 "strip_size_kb": 64, 00:12:02.530 "state": "configuring", 00:12:02.530 "raid_level": "raid0", 00:12:02.530 "superblock": true, 00:12:02.530 "num_base_bdevs": 4, 00:12:02.530 "num_base_bdevs_discovered": 1, 00:12:02.530 "num_base_bdevs_operational": 4, 00:12:02.530 "base_bdevs_list": [ 00:12:02.530 { 00:12:02.530 "name": "BaseBdev1", 00:12:02.530 "uuid": "20c00520-da7c-46fc-bd2d-732c7e1d7666", 00:12:02.530 "is_configured": true, 00:12:02.530 "data_offset": 2048, 00:12:02.530 "data_size": 63488 00:12:02.530 }, 00:12:02.530 { 00:12:02.530 "name": "BaseBdev2", 00:12:02.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.530 "is_configured": false, 00:12:02.530 "data_offset": 0, 00:12:02.530 "data_size": 0 00:12:02.530 }, 00:12:02.530 { 00:12:02.530 "name": "BaseBdev3", 00:12:02.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.530 "is_configured": false, 00:12:02.530 "data_offset": 0, 00:12:02.530 "data_size": 0 00:12:02.530 }, 00:12:02.530 { 00:12:02.530 "name": "BaseBdev4", 00:12:02.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.530 "is_configured": false, 00:12:02.530 "data_offset": 0, 00:12:02.530 "data_size": 0 00:12:02.530 } 00:12:02.530 ] 00:12:02.530 }' 00:12:02.531 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.531 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.791 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:02.791 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.791 09:49:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:03.052 [2024-11-27 09:49:03.927177] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.052 [2024-11-27 09:49:03.927321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.052 [2024-11-27 09:49:03.939212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:03.052 [2024-11-27 09:49:03.941416] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:03.052 [2024-11-27 09:49:03.941499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:03.052 [2024-11-27 09:49:03.941515] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:03.052 [2024-11-27 09:49:03.941527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:03.052 [2024-11-27 09:49:03.941534] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:03.052 [2024-11-27 09:49:03.941542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:03.052 09:49:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.052 "name": 
"Existed_Raid", 00:12:03.052 "uuid": "fe6f1f51-ee56-4629-a7dc-09ea6a51a766", 00:12:03.052 "strip_size_kb": 64, 00:12:03.052 "state": "configuring", 00:12:03.052 "raid_level": "raid0", 00:12:03.052 "superblock": true, 00:12:03.052 "num_base_bdevs": 4, 00:12:03.052 "num_base_bdevs_discovered": 1, 00:12:03.052 "num_base_bdevs_operational": 4, 00:12:03.052 "base_bdevs_list": [ 00:12:03.052 { 00:12:03.052 "name": "BaseBdev1", 00:12:03.052 "uuid": "20c00520-da7c-46fc-bd2d-732c7e1d7666", 00:12:03.052 "is_configured": true, 00:12:03.052 "data_offset": 2048, 00:12:03.052 "data_size": 63488 00:12:03.052 }, 00:12:03.052 { 00:12:03.052 "name": "BaseBdev2", 00:12:03.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.052 "is_configured": false, 00:12:03.052 "data_offset": 0, 00:12:03.052 "data_size": 0 00:12:03.052 }, 00:12:03.052 { 00:12:03.052 "name": "BaseBdev3", 00:12:03.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.052 "is_configured": false, 00:12:03.052 "data_offset": 0, 00:12:03.052 "data_size": 0 00:12:03.052 }, 00:12:03.052 { 00:12:03.052 "name": "BaseBdev4", 00:12:03.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.052 "is_configured": false, 00:12:03.052 "data_offset": 0, 00:12:03.052 "data_size": 0 00:12:03.052 } 00:12:03.052 ] 00:12:03.052 }' 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.052 09:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.312 09:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:03.312 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.312 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.312 [2024-11-27 09:49:04.442591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:12:03.312 BaseBdev2 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.573 [ 00:12:03.573 { 00:12:03.573 "name": "BaseBdev2", 00:12:03.573 "aliases": [ 00:12:03.573 "45f0cf96-137b-4789-8c70-bc5f341d2190" 00:12:03.573 ], 00:12:03.573 "product_name": "Malloc disk", 00:12:03.573 "block_size": 512, 00:12:03.573 "num_blocks": 65536, 00:12:03.573 "uuid": "45f0cf96-137b-4789-8c70-bc5f341d2190", 00:12:03.573 
"assigned_rate_limits": { 00:12:03.573 "rw_ios_per_sec": 0, 00:12:03.573 "rw_mbytes_per_sec": 0, 00:12:03.573 "r_mbytes_per_sec": 0, 00:12:03.573 "w_mbytes_per_sec": 0 00:12:03.573 }, 00:12:03.573 "claimed": true, 00:12:03.573 "claim_type": "exclusive_write", 00:12:03.573 "zoned": false, 00:12:03.573 "supported_io_types": { 00:12:03.573 "read": true, 00:12:03.573 "write": true, 00:12:03.573 "unmap": true, 00:12:03.573 "flush": true, 00:12:03.573 "reset": true, 00:12:03.573 "nvme_admin": false, 00:12:03.573 "nvme_io": false, 00:12:03.573 "nvme_io_md": false, 00:12:03.573 "write_zeroes": true, 00:12:03.573 "zcopy": true, 00:12:03.573 "get_zone_info": false, 00:12:03.573 "zone_management": false, 00:12:03.573 "zone_append": false, 00:12:03.573 "compare": false, 00:12:03.573 "compare_and_write": false, 00:12:03.573 "abort": true, 00:12:03.573 "seek_hole": false, 00:12:03.573 "seek_data": false, 00:12:03.573 "copy": true, 00:12:03.573 "nvme_iov_md": false 00:12:03.573 }, 00:12:03.573 "memory_domains": [ 00:12:03.573 { 00:12:03.573 "dma_device_id": "system", 00:12:03.573 "dma_device_type": 1 00:12:03.573 }, 00:12:03.573 { 00:12:03.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.573 "dma_device_type": 2 00:12:03.573 } 00:12:03.573 ], 00:12:03.573 "driver_specific": {} 00:12:03.573 } 00:12:03.573 ] 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.573 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.574 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.574 09:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.574 "name": "Existed_Raid", 00:12:03.574 "uuid": "fe6f1f51-ee56-4629-a7dc-09ea6a51a766", 00:12:03.574 "strip_size_kb": 64, 00:12:03.574 "state": "configuring", 00:12:03.574 "raid_level": "raid0", 00:12:03.574 "superblock": true, 00:12:03.574 "num_base_bdevs": 4, 00:12:03.574 "num_base_bdevs_discovered": 2, 00:12:03.574 "num_base_bdevs_operational": 4, 
00:12:03.574 "base_bdevs_list": [ 00:12:03.574 { 00:12:03.574 "name": "BaseBdev1", 00:12:03.574 "uuid": "20c00520-da7c-46fc-bd2d-732c7e1d7666", 00:12:03.574 "is_configured": true, 00:12:03.574 "data_offset": 2048, 00:12:03.574 "data_size": 63488 00:12:03.574 }, 00:12:03.574 { 00:12:03.574 "name": "BaseBdev2", 00:12:03.574 "uuid": "45f0cf96-137b-4789-8c70-bc5f341d2190", 00:12:03.574 "is_configured": true, 00:12:03.574 "data_offset": 2048, 00:12:03.574 "data_size": 63488 00:12:03.574 }, 00:12:03.574 { 00:12:03.574 "name": "BaseBdev3", 00:12:03.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.574 "is_configured": false, 00:12:03.574 "data_offset": 0, 00:12:03.574 "data_size": 0 00:12:03.574 }, 00:12:03.574 { 00:12:03.574 "name": "BaseBdev4", 00:12:03.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.574 "is_configured": false, 00:12:03.574 "data_offset": 0, 00:12:03.574 "data_size": 0 00:12:03.574 } 00:12:03.574 ] 00:12:03.574 }' 00:12:03.574 09:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.574 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.834 09:49:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:03.834 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.834 09:49:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.094 [2024-11-27 09:49:05.019926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:04.094 BaseBdev3 00:12:04.094 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.094 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.095 [ 00:12:04.095 { 00:12:04.095 "name": "BaseBdev3", 00:12:04.095 "aliases": [ 00:12:04.095 "af14d777-e9f5-403d-ae53-ea20b0517e6e" 00:12:04.095 ], 00:12:04.095 "product_name": "Malloc disk", 00:12:04.095 "block_size": 512, 00:12:04.095 "num_blocks": 65536, 00:12:04.095 "uuid": "af14d777-e9f5-403d-ae53-ea20b0517e6e", 00:12:04.095 "assigned_rate_limits": { 00:12:04.095 "rw_ios_per_sec": 0, 00:12:04.095 "rw_mbytes_per_sec": 0, 00:12:04.095 "r_mbytes_per_sec": 0, 00:12:04.095 "w_mbytes_per_sec": 0 00:12:04.095 }, 00:12:04.095 "claimed": true, 00:12:04.095 "claim_type": "exclusive_write", 00:12:04.095 "zoned": false, 00:12:04.095 "supported_io_types": { 00:12:04.095 "read": true, 00:12:04.095 
"write": true, 00:12:04.095 "unmap": true, 00:12:04.095 "flush": true, 00:12:04.095 "reset": true, 00:12:04.095 "nvme_admin": false, 00:12:04.095 "nvme_io": false, 00:12:04.095 "nvme_io_md": false, 00:12:04.095 "write_zeroes": true, 00:12:04.095 "zcopy": true, 00:12:04.095 "get_zone_info": false, 00:12:04.095 "zone_management": false, 00:12:04.095 "zone_append": false, 00:12:04.095 "compare": false, 00:12:04.095 "compare_and_write": false, 00:12:04.095 "abort": true, 00:12:04.095 "seek_hole": false, 00:12:04.095 "seek_data": false, 00:12:04.095 "copy": true, 00:12:04.095 "nvme_iov_md": false 00:12:04.095 }, 00:12:04.095 "memory_domains": [ 00:12:04.095 { 00:12:04.095 "dma_device_id": "system", 00:12:04.095 "dma_device_type": 1 00:12:04.095 }, 00:12:04.095 { 00:12:04.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.095 "dma_device_type": 2 00:12:04.095 } 00:12:04.095 ], 00:12:04.095 "driver_specific": {} 00:12:04.095 } 00:12:04.095 ] 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.095 "name": "Existed_Raid", 00:12:04.095 "uuid": "fe6f1f51-ee56-4629-a7dc-09ea6a51a766", 00:12:04.095 "strip_size_kb": 64, 00:12:04.095 "state": "configuring", 00:12:04.095 "raid_level": "raid0", 00:12:04.095 "superblock": true, 00:12:04.095 "num_base_bdevs": 4, 00:12:04.095 "num_base_bdevs_discovered": 3, 00:12:04.095 "num_base_bdevs_operational": 4, 00:12:04.095 "base_bdevs_list": [ 00:12:04.095 { 00:12:04.095 "name": "BaseBdev1", 00:12:04.095 "uuid": "20c00520-da7c-46fc-bd2d-732c7e1d7666", 00:12:04.095 "is_configured": true, 00:12:04.095 "data_offset": 2048, 00:12:04.095 "data_size": 63488 00:12:04.095 }, 00:12:04.095 { 00:12:04.095 "name": "BaseBdev2", 00:12:04.095 "uuid": 
"45f0cf96-137b-4789-8c70-bc5f341d2190", 00:12:04.095 "is_configured": true, 00:12:04.095 "data_offset": 2048, 00:12:04.095 "data_size": 63488 00:12:04.095 }, 00:12:04.095 { 00:12:04.095 "name": "BaseBdev3", 00:12:04.095 "uuid": "af14d777-e9f5-403d-ae53-ea20b0517e6e", 00:12:04.095 "is_configured": true, 00:12:04.095 "data_offset": 2048, 00:12:04.095 "data_size": 63488 00:12:04.095 }, 00:12:04.095 { 00:12:04.095 "name": "BaseBdev4", 00:12:04.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.095 "is_configured": false, 00:12:04.095 "data_offset": 0, 00:12:04.095 "data_size": 0 00:12:04.095 } 00:12:04.095 ] 00:12:04.095 }' 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.095 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.666 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:04.666 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.666 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.666 BaseBdev4 00:12:04.666 [2024-11-27 09:49:05.568269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:04.666 [2024-11-27 09:49:05.568574] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:04.666 [2024-11-27 09:49:05.568591] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:04.666 [2024-11-27 09:49:05.568891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:04.666 [2024-11-27 09:49:05.569080] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:04.666 [2024-11-27 09:49:05.569110] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:04.666 [2024-11-27 09:49:05.569287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.666 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.666 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:04.666 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:04.666 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.666 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:04.666 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.666 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:04.666 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.666 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.666 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.666 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.666 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:04.666 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.666 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.666 [ 00:12:04.666 { 00:12:04.666 "name": "BaseBdev4", 00:12:04.666 "aliases": [ 00:12:04.666 "ccc732a0-6d42-4b95-a96f-4f5a2c32e1c0" 00:12:04.666 ], 00:12:04.666 "product_name": "Malloc disk", 00:12:04.666 "block_size": 512, 00:12:04.666 
"num_blocks": 65536, 00:12:04.666 "uuid": "ccc732a0-6d42-4b95-a96f-4f5a2c32e1c0", 00:12:04.666 "assigned_rate_limits": { 00:12:04.666 "rw_ios_per_sec": 0, 00:12:04.666 "rw_mbytes_per_sec": 0, 00:12:04.666 "r_mbytes_per_sec": 0, 00:12:04.666 "w_mbytes_per_sec": 0 00:12:04.666 }, 00:12:04.666 "claimed": true, 00:12:04.666 "claim_type": "exclusive_write", 00:12:04.666 "zoned": false, 00:12:04.666 "supported_io_types": { 00:12:04.666 "read": true, 00:12:04.666 "write": true, 00:12:04.667 "unmap": true, 00:12:04.667 "flush": true, 00:12:04.667 "reset": true, 00:12:04.667 "nvme_admin": false, 00:12:04.667 "nvme_io": false, 00:12:04.667 "nvme_io_md": false, 00:12:04.667 "write_zeroes": true, 00:12:04.667 "zcopy": true, 00:12:04.667 "get_zone_info": false, 00:12:04.667 "zone_management": false, 00:12:04.667 "zone_append": false, 00:12:04.667 "compare": false, 00:12:04.667 "compare_and_write": false, 00:12:04.667 "abort": true, 00:12:04.667 "seek_hole": false, 00:12:04.667 "seek_data": false, 00:12:04.667 "copy": true, 00:12:04.667 "nvme_iov_md": false 00:12:04.667 }, 00:12:04.667 "memory_domains": [ 00:12:04.667 { 00:12:04.667 "dma_device_id": "system", 00:12:04.667 "dma_device_type": 1 00:12:04.667 }, 00:12:04.667 { 00:12:04.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.667 "dma_device_type": 2 00:12:04.667 } 00:12:04.667 ], 00:12:04.667 "driver_specific": {} 00:12:04.667 } 00:12:04.667 ] 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.667 "name": "Existed_Raid", 00:12:04.667 "uuid": "fe6f1f51-ee56-4629-a7dc-09ea6a51a766", 00:12:04.667 "strip_size_kb": 64, 00:12:04.667 "state": "online", 00:12:04.667 "raid_level": "raid0", 00:12:04.667 "superblock": true, 00:12:04.667 "num_base_bdevs": 4, 
00:12:04.667 "num_base_bdevs_discovered": 4, 00:12:04.667 "num_base_bdevs_operational": 4, 00:12:04.667 "base_bdevs_list": [ 00:12:04.667 { 00:12:04.667 "name": "BaseBdev1", 00:12:04.667 "uuid": "20c00520-da7c-46fc-bd2d-732c7e1d7666", 00:12:04.667 "is_configured": true, 00:12:04.667 "data_offset": 2048, 00:12:04.667 "data_size": 63488 00:12:04.667 }, 00:12:04.667 { 00:12:04.667 "name": "BaseBdev2", 00:12:04.667 "uuid": "45f0cf96-137b-4789-8c70-bc5f341d2190", 00:12:04.667 "is_configured": true, 00:12:04.667 "data_offset": 2048, 00:12:04.667 "data_size": 63488 00:12:04.667 }, 00:12:04.667 { 00:12:04.667 "name": "BaseBdev3", 00:12:04.667 "uuid": "af14d777-e9f5-403d-ae53-ea20b0517e6e", 00:12:04.667 "is_configured": true, 00:12:04.667 "data_offset": 2048, 00:12:04.667 "data_size": 63488 00:12:04.667 }, 00:12:04.667 { 00:12:04.667 "name": "BaseBdev4", 00:12:04.667 "uuid": "ccc732a0-6d42-4b95-a96f-4f5a2c32e1c0", 00:12:04.667 "is_configured": true, 00:12:04.667 "data_offset": 2048, 00:12:04.667 "data_size": 63488 00:12:04.667 } 00:12:04.667 ] 00:12:04.667 }' 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.667 09:49:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.237 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:05.237 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:05.237 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.237 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.237 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.237 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:05.237 
09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.238 [2024-11-27 09:49:06.087842] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:05.238 "name": "Existed_Raid", 00:12:05.238 "aliases": [ 00:12:05.238 "fe6f1f51-ee56-4629-a7dc-09ea6a51a766" 00:12:05.238 ], 00:12:05.238 "product_name": "Raid Volume", 00:12:05.238 "block_size": 512, 00:12:05.238 "num_blocks": 253952, 00:12:05.238 "uuid": "fe6f1f51-ee56-4629-a7dc-09ea6a51a766", 00:12:05.238 "assigned_rate_limits": { 00:12:05.238 "rw_ios_per_sec": 0, 00:12:05.238 "rw_mbytes_per_sec": 0, 00:12:05.238 "r_mbytes_per_sec": 0, 00:12:05.238 "w_mbytes_per_sec": 0 00:12:05.238 }, 00:12:05.238 "claimed": false, 00:12:05.238 "zoned": false, 00:12:05.238 "supported_io_types": { 00:12:05.238 "read": true, 00:12:05.238 "write": true, 00:12:05.238 "unmap": true, 00:12:05.238 "flush": true, 00:12:05.238 "reset": true, 00:12:05.238 "nvme_admin": false, 00:12:05.238 "nvme_io": false, 00:12:05.238 "nvme_io_md": false, 00:12:05.238 "write_zeroes": true, 00:12:05.238 "zcopy": false, 00:12:05.238 "get_zone_info": false, 00:12:05.238 "zone_management": false, 00:12:05.238 "zone_append": false, 00:12:05.238 "compare": false, 00:12:05.238 "compare_and_write": false, 00:12:05.238 "abort": false, 00:12:05.238 "seek_hole": false, 00:12:05.238 "seek_data": false, 00:12:05.238 "copy": false, 00:12:05.238 
"nvme_iov_md": false 00:12:05.238 }, 00:12:05.238 "memory_domains": [ 00:12:05.238 { 00:12:05.238 "dma_device_id": "system", 00:12:05.238 "dma_device_type": 1 00:12:05.238 }, 00:12:05.238 { 00:12:05.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.238 "dma_device_type": 2 00:12:05.238 }, 00:12:05.238 { 00:12:05.238 "dma_device_id": "system", 00:12:05.238 "dma_device_type": 1 00:12:05.238 }, 00:12:05.238 { 00:12:05.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.238 "dma_device_type": 2 00:12:05.238 }, 00:12:05.238 { 00:12:05.238 "dma_device_id": "system", 00:12:05.238 "dma_device_type": 1 00:12:05.238 }, 00:12:05.238 { 00:12:05.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.238 "dma_device_type": 2 00:12:05.238 }, 00:12:05.238 { 00:12:05.238 "dma_device_id": "system", 00:12:05.238 "dma_device_type": 1 00:12:05.238 }, 00:12:05.238 { 00:12:05.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.238 "dma_device_type": 2 00:12:05.238 } 00:12:05.238 ], 00:12:05.238 "driver_specific": { 00:12:05.238 "raid": { 00:12:05.238 "uuid": "fe6f1f51-ee56-4629-a7dc-09ea6a51a766", 00:12:05.238 "strip_size_kb": 64, 00:12:05.238 "state": "online", 00:12:05.238 "raid_level": "raid0", 00:12:05.238 "superblock": true, 00:12:05.238 "num_base_bdevs": 4, 00:12:05.238 "num_base_bdevs_discovered": 4, 00:12:05.238 "num_base_bdevs_operational": 4, 00:12:05.238 "base_bdevs_list": [ 00:12:05.238 { 00:12:05.238 "name": "BaseBdev1", 00:12:05.238 "uuid": "20c00520-da7c-46fc-bd2d-732c7e1d7666", 00:12:05.238 "is_configured": true, 00:12:05.238 "data_offset": 2048, 00:12:05.238 "data_size": 63488 00:12:05.238 }, 00:12:05.238 { 00:12:05.238 "name": "BaseBdev2", 00:12:05.238 "uuid": "45f0cf96-137b-4789-8c70-bc5f341d2190", 00:12:05.238 "is_configured": true, 00:12:05.238 "data_offset": 2048, 00:12:05.238 "data_size": 63488 00:12:05.238 }, 00:12:05.238 { 00:12:05.238 "name": "BaseBdev3", 00:12:05.238 "uuid": "af14d777-e9f5-403d-ae53-ea20b0517e6e", 00:12:05.238 "is_configured": true, 
00:12:05.238 "data_offset": 2048, 00:12:05.238 "data_size": 63488 00:12:05.238 }, 00:12:05.238 { 00:12:05.238 "name": "BaseBdev4", 00:12:05.238 "uuid": "ccc732a0-6d42-4b95-a96f-4f5a2c32e1c0", 00:12:05.238 "is_configured": true, 00:12:05.238 "data_offset": 2048, 00:12:05.238 "data_size": 63488 00:12:05.238 } 00:12:05.238 ] 00:12:05.238 } 00:12:05.238 } 00:12:05.238 }' 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:05.238 BaseBdev2 00:12:05.238 BaseBdev3 00:12:05.238 BaseBdev4' 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.238 09:49:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.238 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.499 [2024-11-27 09:49:06.435060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:05.499 [2024-11-27 09:49:06.435099] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.499 [2024-11-27 09:49:06.435163] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.499 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.500 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.500 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:05.500 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.500 "name": "Existed_Raid", 00:12:05.500 "uuid": "fe6f1f51-ee56-4629-a7dc-09ea6a51a766", 00:12:05.500 "strip_size_kb": 64, 00:12:05.500 "state": "offline", 00:12:05.500 "raid_level": "raid0", 00:12:05.500 "superblock": true, 00:12:05.500 "num_base_bdevs": 4, 00:12:05.500 "num_base_bdevs_discovered": 3, 00:12:05.500 "num_base_bdevs_operational": 3, 00:12:05.500 "base_bdevs_list": [ 00:12:05.500 { 00:12:05.500 "name": null, 00:12:05.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.500 "is_configured": false, 00:12:05.500 "data_offset": 0, 00:12:05.500 "data_size": 63488 00:12:05.500 }, 00:12:05.500 { 00:12:05.500 "name": "BaseBdev2", 00:12:05.500 "uuid": "45f0cf96-137b-4789-8c70-bc5f341d2190", 00:12:05.500 "is_configured": true, 00:12:05.500 "data_offset": 2048, 00:12:05.500 "data_size": 63488 00:12:05.500 }, 00:12:05.500 { 00:12:05.500 "name": "BaseBdev3", 00:12:05.500 "uuid": "af14d777-e9f5-403d-ae53-ea20b0517e6e", 00:12:05.500 "is_configured": true, 00:12:05.500 "data_offset": 2048, 00:12:05.500 "data_size": 63488 00:12:05.500 }, 00:12:05.500 { 00:12:05.500 "name": "BaseBdev4", 00:12:05.500 "uuid": "ccc732a0-6d42-4b95-a96f-4f5a2c32e1c0", 00:12:05.500 "is_configured": true, 00:12:05.500 "data_offset": 2048, 00:12:05.500 "data_size": 63488 00:12:05.500 } 00:12:05.500 ] 00:12:05.500 }' 00:12:05.500 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.500 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.070 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:06.070 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.070 09:49:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.070 09:49:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.070 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.070 09:49:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.070 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.070 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.070 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.070 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:06.070 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.070 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.070 [2024-11-27 09:49:07.049301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:06.070 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.070 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:06.070 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.070 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.070 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.070 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.070 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.070 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:06.330 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.330 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.330 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:06.330 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.330 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.330 [2024-11-27 09:49:07.210922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:06.330 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.330 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:06.330 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.330 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.330 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.330 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.330 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:06.330 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.330 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:06.330 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.330 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:06.331 09:49:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.331 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.331 [2024-11-27 09:49:07.382789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:06.331 [2024-11-27 09:49:07.382858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.591 BaseBdev2 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:06.591 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.592 [ 00:12:06.592 { 00:12:06.592 "name": "BaseBdev2", 00:12:06.592 "aliases": [ 00:12:06.592 
"04a8ccc1-ad98-4be7-882f-3ed9fe1a16de" 00:12:06.592 ], 00:12:06.592 "product_name": "Malloc disk", 00:12:06.592 "block_size": 512, 00:12:06.592 "num_blocks": 65536, 00:12:06.592 "uuid": "04a8ccc1-ad98-4be7-882f-3ed9fe1a16de", 00:12:06.592 "assigned_rate_limits": { 00:12:06.592 "rw_ios_per_sec": 0, 00:12:06.592 "rw_mbytes_per_sec": 0, 00:12:06.592 "r_mbytes_per_sec": 0, 00:12:06.592 "w_mbytes_per_sec": 0 00:12:06.592 }, 00:12:06.592 "claimed": false, 00:12:06.592 "zoned": false, 00:12:06.592 "supported_io_types": { 00:12:06.592 "read": true, 00:12:06.592 "write": true, 00:12:06.592 "unmap": true, 00:12:06.592 "flush": true, 00:12:06.592 "reset": true, 00:12:06.592 "nvme_admin": false, 00:12:06.592 "nvme_io": false, 00:12:06.592 "nvme_io_md": false, 00:12:06.592 "write_zeroes": true, 00:12:06.592 "zcopy": true, 00:12:06.592 "get_zone_info": false, 00:12:06.592 "zone_management": false, 00:12:06.592 "zone_append": false, 00:12:06.592 "compare": false, 00:12:06.592 "compare_and_write": false, 00:12:06.592 "abort": true, 00:12:06.592 "seek_hole": false, 00:12:06.592 "seek_data": false, 00:12:06.592 "copy": true, 00:12:06.592 "nvme_iov_md": false 00:12:06.592 }, 00:12:06.592 "memory_domains": [ 00:12:06.592 { 00:12:06.592 "dma_device_id": "system", 00:12:06.592 "dma_device_type": 1 00:12:06.592 }, 00:12:06.592 { 00:12:06.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.592 "dma_device_type": 2 00:12:06.592 } 00:12:06.592 ], 00:12:06.592 "driver_specific": {} 00:12:06.592 } 00:12:06.592 ] 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.592 09:49:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.592 BaseBdev3 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.592 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.592 [ 00:12:06.592 { 
00:12:06.592 "name": "BaseBdev3", 00:12:06.592 "aliases": [ 00:12:06.592 "deb7181a-51d8-4cf3-bb8e-3668fcaf0ecb" 00:12:06.592 ], 00:12:06.592 "product_name": "Malloc disk", 00:12:06.592 "block_size": 512, 00:12:06.592 "num_blocks": 65536, 00:12:06.592 "uuid": "deb7181a-51d8-4cf3-bb8e-3668fcaf0ecb", 00:12:06.592 "assigned_rate_limits": { 00:12:06.592 "rw_ios_per_sec": 0, 00:12:06.592 "rw_mbytes_per_sec": 0, 00:12:06.592 "r_mbytes_per_sec": 0, 00:12:06.592 "w_mbytes_per_sec": 0 00:12:06.592 }, 00:12:06.592 "claimed": false, 00:12:06.592 "zoned": false, 00:12:06.592 "supported_io_types": { 00:12:06.592 "read": true, 00:12:06.592 "write": true, 00:12:06.592 "unmap": true, 00:12:06.592 "flush": true, 00:12:06.592 "reset": true, 00:12:06.592 "nvme_admin": false, 00:12:06.592 "nvme_io": false, 00:12:06.592 "nvme_io_md": false, 00:12:06.592 "write_zeroes": true, 00:12:06.592 "zcopy": true, 00:12:06.592 "get_zone_info": false, 00:12:06.592 "zone_management": false, 00:12:06.592 "zone_append": false, 00:12:06.592 "compare": false, 00:12:06.592 "compare_and_write": false, 00:12:06.592 "abort": true, 00:12:06.592 "seek_hole": false, 00:12:06.592 "seek_data": false, 00:12:06.592 "copy": true, 00:12:06.592 "nvme_iov_md": false 00:12:06.592 }, 00:12:06.592 "memory_domains": [ 00:12:06.592 { 00:12:06.592 "dma_device_id": "system", 00:12:06.592 "dma_device_type": 1 00:12:06.592 }, 00:12:06.592 { 00:12:06.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.592 "dma_device_type": 2 00:12:06.592 } 00:12:06.853 ], 00:12:06.853 "driver_specific": {} 00:12:06.853 } 00:12:06.853 ] 00:12:06.853 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.853 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.854 BaseBdev4 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:06.854 [ 00:12:06.854 { 00:12:06.854 "name": "BaseBdev4", 00:12:06.854 "aliases": [ 00:12:06.854 "002545ea-29ad-4ebd-ad79-7adb964b6d2e" 00:12:06.854 ], 00:12:06.854 "product_name": "Malloc disk", 00:12:06.854 "block_size": 512, 00:12:06.854 "num_blocks": 65536, 00:12:06.854 "uuid": "002545ea-29ad-4ebd-ad79-7adb964b6d2e", 00:12:06.854 "assigned_rate_limits": { 00:12:06.854 "rw_ios_per_sec": 0, 00:12:06.854 "rw_mbytes_per_sec": 0, 00:12:06.854 "r_mbytes_per_sec": 0, 00:12:06.854 "w_mbytes_per_sec": 0 00:12:06.854 }, 00:12:06.854 "claimed": false, 00:12:06.854 "zoned": false, 00:12:06.854 "supported_io_types": { 00:12:06.854 "read": true, 00:12:06.854 "write": true, 00:12:06.854 "unmap": true, 00:12:06.854 "flush": true, 00:12:06.854 "reset": true, 00:12:06.854 "nvme_admin": false, 00:12:06.854 "nvme_io": false, 00:12:06.854 "nvme_io_md": false, 00:12:06.854 "write_zeroes": true, 00:12:06.854 "zcopy": true, 00:12:06.854 "get_zone_info": false, 00:12:06.854 "zone_management": false, 00:12:06.854 "zone_append": false, 00:12:06.854 "compare": false, 00:12:06.854 "compare_and_write": false, 00:12:06.854 "abort": true, 00:12:06.854 "seek_hole": false, 00:12:06.854 "seek_data": false, 00:12:06.854 "copy": true, 00:12:06.854 "nvme_iov_md": false 00:12:06.854 }, 00:12:06.854 "memory_domains": [ 00:12:06.854 { 00:12:06.854 "dma_device_id": "system", 00:12:06.854 "dma_device_type": 1 00:12:06.854 }, 00:12:06.854 { 00:12:06.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.854 "dma_device_type": 2 00:12:06.854 } 00:12:06.854 ], 00:12:06.854 "driver_specific": {} 00:12:06.854 } 00:12:06.854 ] 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:06.854 09:49:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.854 [2024-11-27 09:49:07.819086] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:06.854 [2024-11-27 09:49:07.819195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:06.854 [2024-11-27 09:49:07.819247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:06.854 [2024-11-27 09:49:07.821552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:06.854 [2024-11-27 09:49:07.821656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.854 "name": "Existed_Raid", 00:12:06.854 "uuid": "96760ad5-b1a3-4d8c-b0e6-2bb46f9cdffe", 00:12:06.854 "strip_size_kb": 64, 00:12:06.854 "state": "configuring", 00:12:06.854 "raid_level": "raid0", 00:12:06.854 "superblock": true, 00:12:06.854 "num_base_bdevs": 4, 00:12:06.854 "num_base_bdevs_discovered": 3, 00:12:06.854 "num_base_bdevs_operational": 4, 00:12:06.854 "base_bdevs_list": [ 00:12:06.854 { 00:12:06.854 "name": "BaseBdev1", 00:12:06.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.854 "is_configured": false, 00:12:06.854 "data_offset": 0, 00:12:06.854 "data_size": 0 00:12:06.854 }, 00:12:06.854 { 00:12:06.854 "name": "BaseBdev2", 00:12:06.854 "uuid": "04a8ccc1-ad98-4be7-882f-3ed9fe1a16de", 00:12:06.854 "is_configured": true, 00:12:06.854 "data_offset": 2048, 00:12:06.854 "data_size": 63488 
00:12:06.854 }, 00:12:06.854 { 00:12:06.854 "name": "BaseBdev3", 00:12:06.854 "uuid": "deb7181a-51d8-4cf3-bb8e-3668fcaf0ecb", 00:12:06.854 "is_configured": true, 00:12:06.854 "data_offset": 2048, 00:12:06.854 "data_size": 63488 00:12:06.854 }, 00:12:06.854 { 00:12:06.854 "name": "BaseBdev4", 00:12:06.854 "uuid": "002545ea-29ad-4ebd-ad79-7adb964b6d2e", 00:12:06.854 "is_configured": true, 00:12:06.854 "data_offset": 2048, 00:12:06.854 "data_size": 63488 00:12:06.854 } 00:12:06.854 ] 00:12:06.854 }' 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.854 09:49:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.425 [2024-11-27 09:49:08.262325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.425 "name": "Existed_Raid", 00:12:07.425 "uuid": "96760ad5-b1a3-4d8c-b0e6-2bb46f9cdffe", 00:12:07.425 "strip_size_kb": 64, 00:12:07.425 "state": "configuring", 00:12:07.425 "raid_level": "raid0", 00:12:07.425 "superblock": true, 00:12:07.425 "num_base_bdevs": 4, 00:12:07.425 "num_base_bdevs_discovered": 2, 00:12:07.425 "num_base_bdevs_operational": 4, 00:12:07.425 "base_bdevs_list": [ 00:12:07.425 { 00:12:07.425 "name": "BaseBdev1", 00:12:07.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.425 "is_configured": false, 00:12:07.425 "data_offset": 0, 00:12:07.425 "data_size": 0 00:12:07.425 }, 00:12:07.425 { 00:12:07.425 "name": null, 00:12:07.425 "uuid": "04a8ccc1-ad98-4be7-882f-3ed9fe1a16de", 00:12:07.425 "is_configured": false, 00:12:07.425 "data_offset": 0, 00:12:07.425 "data_size": 63488 
00:12:07.425 }, 00:12:07.425 { 00:12:07.425 "name": "BaseBdev3", 00:12:07.425 "uuid": "deb7181a-51d8-4cf3-bb8e-3668fcaf0ecb", 00:12:07.425 "is_configured": true, 00:12:07.425 "data_offset": 2048, 00:12:07.425 "data_size": 63488 00:12:07.425 }, 00:12:07.425 { 00:12:07.425 "name": "BaseBdev4", 00:12:07.425 "uuid": "002545ea-29ad-4ebd-ad79-7adb964b6d2e", 00:12:07.425 "is_configured": true, 00:12:07.425 "data_offset": 2048, 00:12:07.425 "data_size": 63488 00:12:07.425 } 00:12:07.425 ] 00:12:07.425 }' 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.425 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.685 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.686 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:07.686 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.686 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.686 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.686 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:07.686 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:07.686 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.686 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.962 [2024-11-27 09:49:08.828269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.962 BaseBdev1 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.962 [ 00:12:07.962 { 00:12:07.962 "name": "BaseBdev1", 00:12:07.962 "aliases": [ 00:12:07.962 "cce9cfc8-a45c-46f1-9924-ae7a36fc5022" 00:12:07.962 ], 00:12:07.962 "product_name": "Malloc disk", 00:12:07.962 "block_size": 512, 00:12:07.962 "num_blocks": 65536, 00:12:07.962 "uuid": "cce9cfc8-a45c-46f1-9924-ae7a36fc5022", 00:12:07.962 "assigned_rate_limits": { 00:12:07.962 "rw_ios_per_sec": 0, 00:12:07.962 "rw_mbytes_per_sec": 0, 
00:12:07.962 "r_mbytes_per_sec": 0, 00:12:07.962 "w_mbytes_per_sec": 0 00:12:07.962 }, 00:12:07.962 "claimed": true, 00:12:07.962 "claim_type": "exclusive_write", 00:12:07.962 "zoned": false, 00:12:07.962 "supported_io_types": { 00:12:07.962 "read": true, 00:12:07.962 "write": true, 00:12:07.962 "unmap": true, 00:12:07.962 "flush": true, 00:12:07.962 "reset": true, 00:12:07.962 "nvme_admin": false, 00:12:07.962 "nvme_io": false, 00:12:07.962 "nvme_io_md": false, 00:12:07.962 "write_zeroes": true, 00:12:07.962 "zcopy": true, 00:12:07.962 "get_zone_info": false, 00:12:07.962 "zone_management": false, 00:12:07.962 "zone_append": false, 00:12:07.962 "compare": false, 00:12:07.962 "compare_and_write": false, 00:12:07.962 "abort": true, 00:12:07.962 "seek_hole": false, 00:12:07.962 "seek_data": false, 00:12:07.962 "copy": true, 00:12:07.962 "nvme_iov_md": false 00:12:07.962 }, 00:12:07.962 "memory_domains": [ 00:12:07.962 { 00:12:07.962 "dma_device_id": "system", 00:12:07.962 "dma_device_type": 1 00:12:07.962 }, 00:12:07.962 { 00:12:07.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.962 "dma_device_type": 2 00:12:07.962 } 00:12:07.962 ], 00:12:07.962 "driver_specific": {} 00:12:07.962 } 00:12:07.962 ] 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:07.962 09:49:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.962 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.962 "name": "Existed_Raid", 00:12:07.962 "uuid": "96760ad5-b1a3-4d8c-b0e6-2bb46f9cdffe", 00:12:07.963 "strip_size_kb": 64, 00:12:07.963 "state": "configuring", 00:12:07.963 "raid_level": "raid0", 00:12:07.963 "superblock": true, 00:12:07.963 "num_base_bdevs": 4, 00:12:07.963 "num_base_bdevs_discovered": 3, 00:12:07.963 "num_base_bdevs_operational": 4, 00:12:07.963 "base_bdevs_list": [ 00:12:07.963 { 00:12:07.963 "name": "BaseBdev1", 00:12:07.963 "uuid": "cce9cfc8-a45c-46f1-9924-ae7a36fc5022", 00:12:07.963 "is_configured": true, 00:12:07.963 "data_offset": 2048, 00:12:07.963 "data_size": 63488 00:12:07.963 }, 00:12:07.963 { 
00:12:07.963 "name": null, 00:12:07.963 "uuid": "04a8ccc1-ad98-4be7-882f-3ed9fe1a16de", 00:12:07.963 "is_configured": false, 00:12:07.963 "data_offset": 0, 00:12:07.963 "data_size": 63488 00:12:07.963 }, 00:12:07.963 { 00:12:07.963 "name": "BaseBdev3", 00:12:07.963 "uuid": "deb7181a-51d8-4cf3-bb8e-3668fcaf0ecb", 00:12:07.963 "is_configured": true, 00:12:07.963 "data_offset": 2048, 00:12:07.963 "data_size": 63488 00:12:07.963 }, 00:12:07.963 { 00:12:07.963 "name": "BaseBdev4", 00:12:07.963 "uuid": "002545ea-29ad-4ebd-ad79-7adb964b6d2e", 00:12:07.963 "is_configured": true, 00:12:07.963 "data_offset": 2048, 00:12:07.963 "data_size": 63488 00:12:07.963 } 00:12:07.963 ] 00:12:07.963 }' 00:12:07.963 09:49:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.963 09:49:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.229 [2024-11-27 09:49:09.327607] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.229 09:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.230 09:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.489 09:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.489 09:49:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.489 "name": "Existed_Raid", 00:12:08.489 "uuid": "96760ad5-b1a3-4d8c-b0e6-2bb46f9cdffe", 00:12:08.489 "strip_size_kb": 64, 00:12:08.489 "state": "configuring", 00:12:08.489 "raid_level": "raid0", 00:12:08.489 "superblock": true, 00:12:08.489 "num_base_bdevs": 4, 00:12:08.489 "num_base_bdevs_discovered": 2, 00:12:08.489 "num_base_bdevs_operational": 4, 00:12:08.489 "base_bdevs_list": [ 00:12:08.489 { 00:12:08.489 "name": "BaseBdev1", 00:12:08.489 "uuid": "cce9cfc8-a45c-46f1-9924-ae7a36fc5022", 00:12:08.489 "is_configured": true, 00:12:08.489 "data_offset": 2048, 00:12:08.489 "data_size": 63488 00:12:08.489 }, 00:12:08.489 { 00:12:08.489 "name": null, 00:12:08.489 "uuid": "04a8ccc1-ad98-4be7-882f-3ed9fe1a16de", 00:12:08.489 "is_configured": false, 00:12:08.489 "data_offset": 0, 00:12:08.489 "data_size": 63488 00:12:08.489 }, 00:12:08.489 { 00:12:08.489 "name": null, 00:12:08.489 "uuid": "deb7181a-51d8-4cf3-bb8e-3668fcaf0ecb", 00:12:08.489 "is_configured": false, 00:12:08.489 "data_offset": 0, 00:12:08.489 "data_size": 63488 00:12:08.489 }, 00:12:08.489 { 00:12:08.489 "name": "BaseBdev4", 00:12:08.489 "uuid": "002545ea-29ad-4ebd-ad79-7adb964b6d2e", 00:12:08.489 "is_configured": true, 00:12:08.489 "data_offset": 2048, 00:12:08.489 "data_size": 63488 00:12:08.489 } 00:12:08.489 ] 00:12:08.489 }' 00:12:08.489 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.489 09:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.750 
09:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.750 [2024-11-27 09:49:09.842695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.750 09:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.009 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.009 "name": "Existed_Raid", 00:12:09.009 "uuid": "96760ad5-b1a3-4d8c-b0e6-2bb46f9cdffe", 00:12:09.009 "strip_size_kb": 64, 00:12:09.009 "state": "configuring", 00:12:09.009 "raid_level": "raid0", 00:12:09.009 "superblock": true, 00:12:09.009 "num_base_bdevs": 4, 00:12:09.009 "num_base_bdevs_discovered": 3, 00:12:09.009 "num_base_bdevs_operational": 4, 00:12:09.009 "base_bdevs_list": [ 00:12:09.009 { 00:12:09.009 "name": "BaseBdev1", 00:12:09.009 "uuid": "cce9cfc8-a45c-46f1-9924-ae7a36fc5022", 00:12:09.010 "is_configured": true, 00:12:09.010 "data_offset": 2048, 00:12:09.010 "data_size": 63488 00:12:09.010 }, 00:12:09.010 { 00:12:09.010 "name": null, 00:12:09.010 "uuid": "04a8ccc1-ad98-4be7-882f-3ed9fe1a16de", 00:12:09.010 "is_configured": false, 00:12:09.010 "data_offset": 0, 00:12:09.010 "data_size": 63488 00:12:09.010 }, 00:12:09.010 { 00:12:09.010 "name": "BaseBdev3", 00:12:09.010 "uuid": "deb7181a-51d8-4cf3-bb8e-3668fcaf0ecb", 00:12:09.010 "is_configured": true, 00:12:09.010 "data_offset": 2048, 00:12:09.010 "data_size": 63488 00:12:09.010 }, 00:12:09.010 { 00:12:09.010 "name": "BaseBdev4", 00:12:09.010 "uuid": 
"002545ea-29ad-4ebd-ad79-7adb964b6d2e", 00:12:09.010 "is_configured": true, 00:12:09.010 "data_offset": 2048, 00:12:09.010 "data_size": 63488 00:12:09.010 } 00:12:09.010 ] 00:12:09.010 }' 00:12:09.010 09:49:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.010 09:49:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.269 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.270 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:09.270 09:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.270 09:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.270 09:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.270 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:09.270 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:09.270 09:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.270 09:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.270 [2024-11-27 09:49:10.325905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:09.530 09:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.530 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:09.530 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.530 09:49:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.530 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:09.530 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.530 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.530 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.530 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.530 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.530 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.530 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.530 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.530 09:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.530 09:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.530 09:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.530 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.530 "name": "Existed_Raid", 00:12:09.530 "uuid": "96760ad5-b1a3-4d8c-b0e6-2bb46f9cdffe", 00:12:09.530 "strip_size_kb": 64, 00:12:09.530 "state": "configuring", 00:12:09.530 "raid_level": "raid0", 00:12:09.530 "superblock": true, 00:12:09.530 "num_base_bdevs": 4, 00:12:09.530 "num_base_bdevs_discovered": 2, 00:12:09.530 "num_base_bdevs_operational": 4, 00:12:09.530 "base_bdevs_list": [ 00:12:09.530 { 00:12:09.530 "name": null, 00:12:09.530 
"uuid": "cce9cfc8-a45c-46f1-9924-ae7a36fc5022", 00:12:09.530 "is_configured": false, 00:12:09.530 "data_offset": 0, 00:12:09.530 "data_size": 63488 00:12:09.530 }, 00:12:09.530 { 00:12:09.530 "name": null, 00:12:09.530 "uuid": "04a8ccc1-ad98-4be7-882f-3ed9fe1a16de", 00:12:09.530 "is_configured": false, 00:12:09.530 "data_offset": 0, 00:12:09.530 "data_size": 63488 00:12:09.530 }, 00:12:09.530 { 00:12:09.530 "name": "BaseBdev3", 00:12:09.530 "uuid": "deb7181a-51d8-4cf3-bb8e-3668fcaf0ecb", 00:12:09.530 "is_configured": true, 00:12:09.530 "data_offset": 2048, 00:12:09.530 "data_size": 63488 00:12:09.530 }, 00:12:09.530 { 00:12:09.530 "name": "BaseBdev4", 00:12:09.530 "uuid": "002545ea-29ad-4ebd-ad79-7adb964b6d2e", 00:12:09.530 "is_configured": true, 00:12:09.530 "data_offset": 2048, 00:12:09.530 "data_size": 63488 00:12:09.530 } 00:12:09.530 ] 00:12:09.530 }' 00:12:09.530 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.530 09:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.789 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.789 09:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.789 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:09.789 09:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.789 09:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.789 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.049 [2024-11-27 09:49:10.925396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.049 09:49:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.049 "name": "Existed_Raid", 00:12:10.049 "uuid": "96760ad5-b1a3-4d8c-b0e6-2bb46f9cdffe", 00:12:10.049 "strip_size_kb": 64, 00:12:10.049 "state": "configuring", 00:12:10.049 "raid_level": "raid0", 00:12:10.049 "superblock": true, 00:12:10.049 "num_base_bdevs": 4, 00:12:10.049 "num_base_bdevs_discovered": 3, 00:12:10.049 "num_base_bdevs_operational": 4, 00:12:10.049 "base_bdevs_list": [ 00:12:10.049 { 00:12:10.049 "name": null, 00:12:10.049 "uuid": "cce9cfc8-a45c-46f1-9924-ae7a36fc5022", 00:12:10.049 "is_configured": false, 00:12:10.049 "data_offset": 0, 00:12:10.049 "data_size": 63488 00:12:10.049 }, 00:12:10.049 { 00:12:10.049 "name": "BaseBdev2", 00:12:10.049 "uuid": "04a8ccc1-ad98-4be7-882f-3ed9fe1a16de", 00:12:10.049 "is_configured": true, 00:12:10.049 "data_offset": 2048, 00:12:10.049 "data_size": 63488 00:12:10.049 }, 00:12:10.049 { 00:12:10.049 "name": "BaseBdev3", 00:12:10.049 "uuid": "deb7181a-51d8-4cf3-bb8e-3668fcaf0ecb", 00:12:10.049 "is_configured": true, 00:12:10.049 "data_offset": 2048, 00:12:10.049 "data_size": 63488 00:12:10.049 }, 00:12:10.049 { 00:12:10.049 "name": "BaseBdev4", 00:12:10.049 "uuid": "002545ea-29ad-4ebd-ad79-7adb964b6d2e", 00:12:10.049 "is_configured": true, 00:12:10.049 "data_offset": 2048, 00:12:10.049 "data_size": 63488 00:12:10.049 } 00:12:10.049 ] 00:12:10.049 }' 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.049 09:49:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.309 09:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.309 09:49:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:10.309 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.309 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.310 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.310 09:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:10.310 09:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.310 09:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:10.310 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.310 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.310 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cce9cfc8-a45c-46f1-9924-ae7a36fc5022 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.570 [2024-11-27 09:49:11.499652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:10.570 [2024-11-27 09:49:11.499984] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:10.570 [2024-11-27 09:49:11.500001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:10.570 [2024-11-27 09:49:11.500381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:10.570 [2024-11-27 09:49:11.500545] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:10.570 [2024-11-27 09:49:11.500641] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:10.570 NewBaseBdev 00:12:10.570 [2024-11-27 09:49:11.500873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.570 09:49:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.570 [ 00:12:10.570 { 00:12:10.570 "name": "NewBaseBdev", 00:12:10.570 "aliases": [ 00:12:10.570 "cce9cfc8-a45c-46f1-9924-ae7a36fc5022" 00:12:10.570 ], 00:12:10.570 "product_name": "Malloc disk", 00:12:10.570 "block_size": 512, 00:12:10.570 "num_blocks": 65536, 00:12:10.570 "uuid": "cce9cfc8-a45c-46f1-9924-ae7a36fc5022", 00:12:10.570 "assigned_rate_limits": { 00:12:10.570 "rw_ios_per_sec": 0, 00:12:10.570 "rw_mbytes_per_sec": 0, 00:12:10.570 "r_mbytes_per_sec": 0, 00:12:10.570 "w_mbytes_per_sec": 0 00:12:10.570 }, 00:12:10.570 "claimed": true, 00:12:10.570 "claim_type": "exclusive_write", 00:12:10.570 "zoned": false, 00:12:10.570 "supported_io_types": { 00:12:10.570 "read": true, 00:12:10.570 "write": true, 00:12:10.570 "unmap": true, 00:12:10.570 "flush": true, 00:12:10.570 "reset": true, 00:12:10.570 "nvme_admin": false, 00:12:10.570 "nvme_io": false, 00:12:10.570 "nvme_io_md": false, 00:12:10.570 "write_zeroes": true, 00:12:10.570 "zcopy": true, 00:12:10.570 "get_zone_info": false, 00:12:10.570 "zone_management": false, 00:12:10.570 "zone_append": false, 00:12:10.570 "compare": false, 00:12:10.570 "compare_and_write": false, 00:12:10.570 "abort": true, 00:12:10.570 "seek_hole": false, 00:12:10.570 "seek_data": false, 00:12:10.570 "copy": true, 00:12:10.570 "nvme_iov_md": false 00:12:10.570 }, 00:12:10.570 "memory_domains": [ 00:12:10.570 { 00:12:10.570 "dma_device_id": "system", 00:12:10.570 "dma_device_type": 1 00:12:10.570 }, 00:12:10.570 { 00:12:10.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.570 "dma_device_type": 2 00:12:10.570 } 00:12:10.570 ], 00:12:10.570 "driver_specific": {} 00:12:10.570 } 00:12:10.570 ] 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:10.570 09:49:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.570 09:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.570 "name": "Existed_Raid", 00:12:10.570 "uuid": "96760ad5-b1a3-4d8c-b0e6-2bb46f9cdffe", 00:12:10.570 "strip_size_kb": 64, 00:12:10.570 
"state": "online", 00:12:10.570 "raid_level": "raid0", 00:12:10.570 "superblock": true, 00:12:10.570 "num_base_bdevs": 4, 00:12:10.571 "num_base_bdevs_discovered": 4, 00:12:10.571 "num_base_bdevs_operational": 4, 00:12:10.571 "base_bdevs_list": [ 00:12:10.571 { 00:12:10.571 "name": "NewBaseBdev", 00:12:10.571 "uuid": "cce9cfc8-a45c-46f1-9924-ae7a36fc5022", 00:12:10.571 "is_configured": true, 00:12:10.571 "data_offset": 2048, 00:12:10.571 "data_size": 63488 00:12:10.571 }, 00:12:10.571 { 00:12:10.571 "name": "BaseBdev2", 00:12:10.571 "uuid": "04a8ccc1-ad98-4be7-882f-3ed9fe1a16de", 00:12:10.571 "is_configured": true, 00:12:10.571 "data_offset": 2048, 00:12:10.571 "data_size": 63488 00:12:10.571 }, 00:12:10.571 { 00:12:10.571 "name": "BaseBdev3", 00:12:10.571 "uuid": "deb7181a-51d8-4cf3-bb8e-3668fcaf0ecb", 00:12:10.571 "is_configured": true, 00:12:10.571 "data_offset": 2048, 00:12:10.571 "data_size": 63488 00:12:10.571 }, 00:12:10.571 { 00:12:10.571 "name": "BaseBdev4", 00:12:10.571 "uuid": "002545ea-29ad-4ebd-ad79-7adb964b6d2e", 00:12:10.571 "is_configured": true, 00:12:10.571 "data_offset": 2048, 00:12:10.571 "data_size": 63488 00:12:10.571 } 00:12:10.571 ] 00:12:10.571 }' 00:12:10.571 09:49:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.571 09:49:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:11.144 
09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.144 [2024-11-27 09:49:12.031278] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:11.144 "name": "Existed_Raid", 00:12:11.144 "aliases": [ 00:12:11.144 "96760ad5-b1a3-4d8c-b0e6-2bb46f9cdffe" 00:12:11.144 ], 00:12:11.144 "product_name": "Raid Volume", 00:12:11.144 "block_size": 512, 00:12:11.144 "num_blocks": 253952, 00:12:11.144 "uuid": "96760ad5-b1a3-4d8c-b0e6-2bb46f9cdffe", 00:12:11.144 "assigned_rate_limits": { 00:12:11.144 "rw_ios_per_sec": 0, 00:12:11.144 "rw_mbytes_per_sec": 0, 00:12:11.144 "r_mbytes_per_sec": 0, 00:12:11.144 "w_mbytes_per_sec": 0 00:12:11.144 }, 00:12:11.144 "claimed": false, 00:12:11.144 "zoned": false, 00:12:11.144 "supported_io_types": { 00:12:11.144 "read": true, 00:12:11.144 "write": true, 00:12:11.144 "unmap": true, 00:12:11.144 "flush": true, 00:12:11.144 "reset": true, 00:12:11.144 "nvme_admin": false, 00:12:11.144 "nvme_io": false, 00:12:11.144 "nvme_io_md": false, 00:12:11.144 "write_zeroes": true, 00:12:11.144 "zcopy": false, 00:12:11.144 "get_zone_info": false, 00:12:11.144 "zone_management": false, 00:12:11.144 "zone_append": false, 00:12:11.144 "compare": false, 00:12:11.144 "compare_and_write": false, 00:12:11.144 "abort": 
false, 00:12:11.144 "seek_hole": false, 00:12:11.144 "seek_data": false, 00:12:11.144 "copy": false, 00:12:11.144 "nvme_iov_md": false 00:12:11.144 }, 00:12:11.144 "memory_domains": [ 00:12:11.144 { 00:12:11.144 "dma_device_id": "system", 00:12:11.144 "dma_device_type": 1 00:12:11.144 }, 00:12:11.144 { 00:12:11.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.144 "dma_device_type": 2 00:12:11.144 }, 00:12:11.144 { 00:12:11.144 "dma_device_id": "system", 00:12:11.144 "dma_device_type": 1 00:12:11.144 }, 00:12:11.144 { 00:12:11.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.144 "dma_device_type": 2 00:12:11.144 }, 00:12:11.144 { 00:12:11.144 "dma_device_id": "system", 00:12:11.144 "dma_device_type": 1 00:12:11.144 }, 00:12:11.144 { 00:12:11.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.144 "dma_device_type": 2 00:12:11.144 }, 00:12:11.144 { 00:12:11.144 "dma_device_id": "system", 00:12:11.144 "dma_device_type": 1 00:12:11.144 }, 00:12:11.144 { 00:12:11.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.144 "dma_device_type": 2 00:12:11.144 } 00:12:11.144 ], 00:12:11.144 "driver_specific": { 00:12:11.144 "raid": { 00:12:11.144 "uuid": "96760ad5-b1a3-4d8c-b0e6-2bb46f9cdffe", 00:12:11.144 "strip_size_kb": 64, 00:12:11.144 "state": "online", 00:12:11.144 "raid_level": "raid0", 00:12:11.144 "superblock": true, 00:12:11.144 "num_base_bdevs": 4, 00:12:11.144 "num_base_bdevs_discovered": 4, 00:12:11.144 "num_base_bdevs_operational": 4, 00:12:11.144 "base_bdevs_list": [ 00:12:11.144 { 00:12:11.144 "name": "NewBaseBdev", 00:12:11.144 "uuid": "cce9cfc8-a45c-46f1-9924-ae7a36fc5022", 00:12:11.144 "is_configured": true, 00:12:11.144 "data_offset": 2048, 00:12:11.144 "data_size": 63488 00:12:11.144 }, 00:12:11.144 { 00:12:11.144 "name": "BaseBdev2", 00:12:11.144 "uuid": "04a8ccc1-ad98-4be7-882f-3ed9fe1a16de", 00:12:11.144 "is_configured": true, 00:12:11.144 "data_offset": 2048, 00:12:11.144 "data_size": 63488 00:12:11.144 }, 00:12:11.144 { 00:12:11.144 
"name": "BaseBdev3", 00:12:11.144 "uuid": "deb7181a-51d8-4cf3-bb8e-3668fcaf0ecb", 00:12:11.144 "is_configured": true, 00:12:11.144 "data_offset": 2048, 00:12:11.144 "data_size": 63488 00:12:11.144 }, 00:12:11.144 { 00:12:11.144 "name": "BaseBdev4", 00:12:11.144 "uuid": "002545ea-29ad-4ebd-ad79-7adb964b6d2e", 00:12:11.144 "is_configured": true, 00:12:11.144 "data_offset": 2048, 00:12:11.144 "data_size": 63488 00:12:11.144 } 00:12:11.144 ] 00:12:11.144 } 00:12:11.144 } 00:12:11.144 }' 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:11.144 BaseBdev2 00:12:11.144 BaseBdev3 00:12:11.144 BaseBdev4' 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.144 09:49:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.144 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.404 [2024-11-27 09:49:12.338315] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:11.404 [2024-11-27 09:49:12.338424] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:11.404 [2024-11-27 09:49:12.338553] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:11.404 [2024-11-27 09:49:12.338646] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:11.404 [2024-11-27 09:49:12.338658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70312 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70312 ']' 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70312 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70312 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:11.404 killing process with pid 70312 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70312' 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70312 00:12:11.404 [2024-11-27 09:49:12.386443] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:11.404 09:49:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70312 00:12:11.973 [2024-11-27 09:49:12.829301] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:13.354 09:49:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:13.354 00:12:13.354 real 0m12.134s 00:12:13.354 user 0m18.923s 00:12:13.354 sys 0m2.376s 00:12:13.354 09:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.354 
************************************ 00:12:13.354 END TEST raid_state_function_test_sb 00:12:13.354 ************************************ 00:12:13.354 09:49:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.354 09:49:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:12:13.354 09:49:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:13.354 09:49:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.354 09:49:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:13.354 ************************************ 00:12:13.354 START TEST raid_superblock_test 00:12:13.354 ************************************ 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70990 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70990 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70990 ']' 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.354 09:49:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.354 [2024-11-27 09:49:14.251986] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:12:13.354 [2024-11-27 09:49:14.252264] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70990 ] 00:12:13.354 [2024-11-27 09:49:14.438074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.614 [2024-11-27 09:49:14.581881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.874 [2024-11-27 09:49:14.830797] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.874 [2024-11-27 09:49:14.831046] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:14.134 
09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.134 malloc1 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.134 [2024-11-27 09:49:15.174152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:14.134 [2024-11-27 09:49:15.174315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.134 [2024-11-27 09:49:15.174347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:14.134 [2024-11-27 09:49:15.174358] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.134 [2024-11-27 09:49:15.176985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.134 [2024-11-27 09:49:15.177049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:14.134 pt1 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.134 malloc2 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.134 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.135 [2024-11-27 09:49:15.236656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:14.135 [2024-11-27 09:49:15.236829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.135 [2024-11-27 09:49:15.236888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:14.135 [2024-11-27 09:49:15.236929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.135 [2024-11-27 09:49:15.239665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.135 [2024-11-27 09:49:15.239764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:14.135 
pt2 00:12:14.135 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.135 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:14.135 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:14.135 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:14.135 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:14.135 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:14.135 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:14.135 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:14.135 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:14.135 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:14.135 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.135 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.395 malloc3 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.395 [2024-11-27 09:49:15.315502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:14.395 [2024-11-27 09:49:15.315650] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.395 [2024-11-27 09:49:15.315698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:14.395 [2024-11-27 09:49:15.315732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.395 [2024-11-27 09:49:15.318443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.395 [2024-11-27 09:49:15.318530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:14.395 pt3 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.395 malloc4 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.395 [2024-11-27 09:49:15.384186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:14.395 [2024-11-27 09:49:15.384275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.395 [2024-11-27 09:49:15.384303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:14.395 [2024-11-27 09:49:15.384313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.395 [2024-11-27 09:49:15.387211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.395 [2024-11-27 09:49:15.387260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:14.395 pt4 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.395 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.395 [2024-11-27 09:49:15.396326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:14.395 [2024-11-27 
09:49:15.398685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:14.395 [2024-11-27 09:49:15.398860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:14.396 [2024-11-27 09:49:15.398925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:14.396 [2024-11-27 09:49:15.399156] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:14.396 [2024-11-27 09:49:15.399169] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:14.396 [2024-11-27 09:49:15.399479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:14.396 [2024-11-27 09:49:15.399679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:14.396 [2024-11-27 09:49:15.399693] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:14.396 [2024-11-27 09:49:15.399886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.396 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.396 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:14.396 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.396 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.396 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:14.396 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.396 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.396 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:14.396 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.396 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.396 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.396 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.396 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.396 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.396 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.396 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.396 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.396 "name": "raid_bdev1", 00:12:14.396 "uuid": "beb5415c-fb53-48ef-ad9e-d1df6861865b", 00:12:14.396 "strip_size_kb": 64, 00:12:14.396 "state": "online", 00:12:14.396 "raid_level": "raid0", 00:12:14.396 "superblock": true, 00:12:14.396 "num_base_bdevs": 4, 00:12:14.396 "num_base_bdevs_discovered": 4, 00:12:14.396 "num_base_bdevs_operational": 4, 00:12:14.396 "base_bdevs_list": [ 00:12:14.396 { 00:12:14.396 "name": "pt1", 00:12:14.396 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:14.396 "is_configured": true, 00:12:14.396 "data_offset": 2048, 00:12:14.396 "data_size": 63488 00:12:14.396 }, 00:12:14.396 { 00:12:14.396 "name": "pt2", 00:12:14.396 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:14.396 "is_configured": true, 00:12:14.396 "data_offset": 2048, 00:12:14.396 "data_size": 63488 00:12:14.396 }, 00:12:14.396 { 00:12:14.396 "name": "pt3", 00:12:14.396 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:14.396 "is_configured": true, 00:12:14.396 "data_offset": 2048, 00:12:14.396 
"data_size": 63488 00:12:14.396 }, 00:12:14.396 { 00:12:14.396 "name": "pt4", 00:12:14.396 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:14.396 "is_configured": true, 00:12:14.396 "data_offset": 2048, 00:12:14.396 "data_size": 63488 00:12:14.396 } 00:12:14.396 ] 00:12:14.396 }' 00:12:14.396 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.396 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.661 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:14.661 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:14.661 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:14.661 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:14.661 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:14.661 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:14.661 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:14.661 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.661 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:14.661 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.661 [2024-11-27 09:49:15.783970] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.927 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.927 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:14.927 "name": "raid_bdev1", 00:12:14.927 "aliases": [ 00:12:14.927 "beb5415c-fb53-48ef-ad9e-d1df6861865b" 
00:12:14.927 ], 00:12:14.927 "product_name": "Raid Volume", 00:12:14.927 "block_size": 512, 00:12:14.927 "num_blocks": 253952, 00:12:14.927 "uuid": "beb5415c-fb53-48ef-ad9e-d1df6861865b", 00:12:14.927 "assigned_rate_limits": { 00:12:14.927 "rw_ios_per_sec": 0, 00:12:14.927 "rw_mbytes_per_sec": 0, 00:12:14.927 "r_mbytes_per_sec": 0, 00:12:14.927 "w_mbytes_per_sec": 0 00:12:14.927 }, 00:12:14.927 "claimed": false, 00:12:14.927 "zoned": false, 00:12:14.927 "supported_io_types": { 00:12:14.927 "read": true, 00:12:14.927 "write": true, 00:12:14.927 "unmap": true, 00:12:14.927 "flush": true, 00:12:14.927 "reset": true, 00:12:14.927 "nvme_admin": false, 00:12:14.927 "nvme_io": false, 00:12:14.927 "nvme_io_md": false, 00:12:14.927 "write_zeroes": true, 00:12:14.927 "zcopy": false, 00:12:14.927 "get_zone_info": false, 00:12:14.927 "zone_management": false, 00:12:14.927 "zone_append": false, 00:12:14.927 "compare": false, 00:12:14.927 "compare_and_write": false, 00:12:14.927 "abort": false, 00:12:14.927 "seek_hole": false, 00:12:14.927 "seek_data": false, 00:12:14.927 "copy": false, 00:12:14.927 "nvme_iov_md": false 00:12:14.927 }, 00:12:14.928 "memory_domains": [ 00:12:14.928 { 00:12:14.928 "dma_device_id": "system", 00:12:14.928 "dma_device_type": 1 00:12:14.928 }, 00:12:14.928 { 00:12:14.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.928 "dma_device_type": 2 00:12:14.928 }, 00:12:14.928 { 00:12:14.928 "dma_device_id": "system", 00:12:14.928 "dma_device_type": 1 00:12:14.928 }, 00:12:14.928 { 00:12:14.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.928 "dma_device_type": 2 00:12:14.928 }, 00:12:14.928 { 00:12:14.928 "dma_device_id": "system", 00:12:14.928 "dma_device_type": 1 00:12:14.928 }, 00:12:14.928 { 00:12:14.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.928 "dma_device_type": 2 00:12:14.928 }, 00:12:14.928 { 00:12:14.928 "dma_device_id": "system", 00:12:14.928 "dma_device_type": 1 00:12:14.928 }, 00:12:14.928 { 00:12:14.928 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:14.928 "dma_device_type": 2 00:12:14.928 } 00:12:14.928 ], 00:12:14.928 "driver_specific": { 00:12:14.928 "raid": { 00:12:14.928 "uuid": "beb5415c-fb53-48ef-ad9e-d1df6861865b", 00:12:14.928 "strip_size_kb": 64, 00:12:14.928 "state": "online", 00:12:14.928 "raid_level": "raid0", 00:12:14.928 "superblock": true, 00:12:14.928 "num_base_bdevs": 4, 00:12:14.928 "num_base_bdevs_discovered": 4, 00:12:14.928 "num_base_bdevs_operational": 4, 00:12:14.928 "base_bdevs_list": [ 00:12:14.928 { 00:12:14.928 "name": "pt1", 00:12:14.928 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:14.928 "is_configured": true, 00:12:14.928 "data_offset": 2048, 00:12:14.928 "data_size": 63488 00:12:14.928 }, 00:12:14.928 { 00:12:14.928 "name": "pt2", 00:12:14.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:14.928 "is_configured": true, 00:12:14.928 "data_offset": 2048, 00:12:14.928 "data_size": 63488 00:12:14.928 }, 00:12:14.928 { 00:12:14.928 "name": "pt3", 00:12:14.928 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:14.928 "is_configured": true, 00:12:14.928 "data_offset": 2048, 00:12:14.928 "data_size": 63488 00:12:14.928 }, 00:12:14.928 { 00:12:14.928 "name": "pt4", 00:12:14.928 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:14.928 "is_configured": true, 00:12:14.928 "data_offset": 2048, 00:12:14.928 "data_size": 63488 00:12:14.928 } 00:12:14.928 ] 00:12:14.928 } 00:12:14.928 } 00:12:14.928 }' 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:14.928 pt2 00:12:14.928 pt3 00:12:14.928 pt4' 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.928 09:49:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.928 09:49:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.928 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:14.928 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.928 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.928 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.928 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:14.928 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:14.928 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:14.928 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:14.928 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:14.928 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.928 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.189 [2024-11-27 09:49:16.095362] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=beb5415c-fb53-48ef-ad9e-d1df6861865b 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z beb5415c-fb53-48ef-ad9e-d1df6861865b ']' 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.189 [2024-11-27 09:49:16.135008] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.189 [2024-11-27 09:49:16.135076] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.189 [2024-11-27 09:49:16.135202] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:15.189 [2024-11-27 09:49:16.135305] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.189 [2024-11-27 09:49:16.135355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:15.189 09:49:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.189 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.189 [2024-11-27 09:49:16.302719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:15.189 [2024-11-27 09:49:16.304948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:15.189 [2024-11-27 09:49:16.305057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:15.189 [2024-11-27 09:49:16.305124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:15.189 [2024-11-27 09:49:16.305209] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:15.189 [2024-11-27 09:49:16.305302] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:15.189 [2024-11-27 09:49:16.305363] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:15.189 [2024-11-27 09:49:16.305427] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:15.189 [2024-11-27 09:49:16.305470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.190 [2024-11-27 09:49:16.305507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:12:15.190 request: 00:12:15.190 { 00:12:15.190 "name": "raid_bdev1", 00:12:15.190 "raid_level": "raid0", 00:12:15.190 "base_bdevs": [ 00:12:15.190 "malloc1", 00:12:15.190 "malloc2", 00:12:15.190 "malloc3", 00:12:15.190 "malloc4" 00:12:15.190 ], 00:12:15.190 "strip_size_kb": 64, 00:12:15.190 "superblock": false, 00:12:15.190 "method": "bdev_raid_create", 00:12:15.190 "req_id": 1 00:12:15.190 } 00:12:15.190 Got JSON-RPC error response 00:12:15.190 response: 00:12:15.190 { 00:12:15.190 "code": -17, 00:12:15.190 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:15.190 } 00:12:15.190 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:15.190 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:15.190 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:15.190 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:15.190 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:15.190 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.190 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.190 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.451 [2024-11-27 09:49:16.370568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:15.451 [2024-11-27 09:49:16.370622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.451 [2024-11-27 09:49:16.370643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:15.451 [2024-11-27 09:49:16.370655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.451 [2024-11-27 09:49:16.373283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.451 [2024-11-27 09:49:16.373326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:15.451 [2024-11-27 09:49:16.373421] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:15.451 [2024-11-27 09:49:16.373479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:15.451 pt1 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.451 "name": "raid_bdev1", 00:12:15.451 "uuid": "beb5415c-fb53-48ef-ad9e-d1df6861865b", 00:12:15.451 "strip_size_kb": 64, 00:12:15.451 "state": "configuring", 00:12:15.451 "raid_level": "raid0", 00:12:15.451 "superblock": true, 00:12:15.451 "num_base_bdevs": 4, 00:12:15.451 "num_base_bdevs_discovered": 1, 00:12:15.451 "num_base_bdevs_operational": 4, 00:12:15.451 "base_bdevs_list": [ 00:12:15.451 { 00:12:15.451 "name": "pt1", 00:12:15.451 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:15.451 "is_configured": true, 00:12:15.451 "data_offset": 2048, 00:12:15.451 "data_size": 63488 00:12:15.451 }, 00:12:15.451 { 00:12:15.451 "name": null, 00:12:15.451 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:15.451 "is_configured": false, 00:12:15.451 "data_offset": 2048, 00:12:15.451 "data_size": 63488 00:12:15.451 }, 00:12:15.451 { 00:12:15.451 "name": null, 00:12:15.451 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:15.451 "is_configured": false, 00:12:15.451 "data_offset": 2048, 00:12:15.451 "data_size": 63488 00:12:15.451 }, 00:12:15.451 { 00:12:15.451 "name": null, 00:12:15.451 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:15.451 "is_configured": false, 00:12:15.451 "data_offset": 2048, 00:12:15.451 "data_size": 63488 00:12:15.451 } 00:12:15.451 ] 00:12:15.451 }' 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.451 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.712 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:15.712 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:15.712 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.712 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.712 [2024-11-27 09:49:16.797905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:15.712 [2024-11-27 09:49:16.798067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.712 [2024-11-27 09:49:16.798124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:15.712 [2024-11-27 09:49:16.798162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.712 [2024-11-27 09:49:16.798751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.712 [2024-11-27 09:49:16.798822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:15.712 [2024-11-27 09:49:16.798976] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:15.712 [2024-11-27 09:49:16.799058] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:15.712 pt2 00:12:15.712 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.712 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:15.712 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.712 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.712 [2024-11-27 09:49:16.809863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:15.712 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.712 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:15.712 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.712 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.712 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:15.713 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.713 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.713 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.713 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.713 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.713 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.713 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.713 09:49:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.713 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.713 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.713 09:49:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.973 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.973 "name": "raid_bdev1", 00:12:15.973 "uuid": "beb5415c-fb53-48ef-ad9e-d1df6861865b", 00:12:15.973 "strip_size_kb": 64, 00:12:15.973 "state": "configuring", 00:12:15.973 "raid_level": "raid0", 00:12:15.973 "superblock": true, 00:12:15.973 "num_base_bdevs": 4, 00:12:15.973 "num_base_bdevs_discovered": 1, 00:12:15.973 "num_base_bdevs_operational": 4, 00:12:15.973 "base_bdevs_list": [ 00:12:15.973 { 00:12:15.973 "name": "pt1", 00:12:15.973 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:15.973 "is_configured": true, 00:12:15.973 "data_offset": 2048, 00:12:15.973 "data_size": 63488 00:12:15.973 }, 00:12:15.973 { 00:12:15.973 "name": null, 00:12:15.973 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:15.973 "is_configured": false, 00:12:15.973 "data_offset": 0, 00:12:15.973 "data_size": 63488 00:12:15.973 }, 00:12:15.973 { 00:12:15.973 "name": null, 00:12:15.973 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:15.973 "is_configured": false, 00:12:15.973 "data_offset": 2048, 00:12:15.973 "data_size": 63488 00:12:15.973 }, 00:12:15.973 { 00:12:15.973 "name": null, 00:12:15.973 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:15.973 "is_configured": false, 00:12:15.973 "data_offset": 2048, 00:12:15.973 "data_size": 63488 00:12:15.973 } 00:12:15.973 ] 00:12:15.973 }' 00:12:15.973 09:49:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.973 09:49:16 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:16.234 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:16.234 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:16.234 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:16.234 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.234 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.234 [2024-11-27 09:49:17.257116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:16.234 [2024-11-27 09:49:17.257251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.234 [2024-11-27 09:49:17.257298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:16.234 [2024-11-27 09:49:17.257335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.234 [2024-11-27 09:49:17.257935] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.234 [2024-11-27 09:49:17.258011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:16.234 [2024-11-27 09:49:17.258153] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:16.234 [2024-11-27 09:49:17.258213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:16.234 pt2 00:12:16.234 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.234 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:16.234 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:16.234 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:16.234 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.234 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.234 [2024-11-27 09:49:17.269051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:16.234 [2024-11-27 09:49:17.269148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.234 [2024-11-27 09:49:17.269197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:16.234 [2024-11-27 09:49:17.269230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.234 [2024-11-27 09:49:17.269730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.234 [2024-11-27 09:49:17.269791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:16.234 [2024-11-27 09:49:17.269904] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:16.234 [2024-11-27 09:49:17.269966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:16.234 pt3 00:12:16.234 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.234 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:16.234 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:16.234 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:16.234 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.235 [2024-11-27 09:49:17.280972] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:16.235 [2024-11-27 09:49:17.281076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.235 [2024-11-27 09:49:17.281110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:16.235 [2024-11-27 09:49:17.281137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.235 [2024-11-27 09:49:17.281547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.235 [2024-11-27 09:49:17.281601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:16.235 [2024-11-27 09:49:17.281691] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:16.235 [2024-11-27 09:49:17.281740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:16.235 [2024-11-27 09:49:17.281894] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:16.235 [2024-11-27 09:49:17.281931] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:16.235 [2024-11-27 09:49:17.282231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:16.235 [2024-11-27 09:49:17.282388] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:16.235 [2024-11-27 09:49:17.282401] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:16.235 [2024-11-27 09:49:17.282533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.235 pt4 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.235 "name": "raid_bdev1", 00:12:16.235 "uuid": "beb5415c-fb53-48ef-ad9e-d1df6861865b", 00:12:16.235 "strip_size_kb": 64, 00:12:16.235 "state": "online", 00:12:16.235 "raid_level": "raid0", 00:12:16.235 
"superblock": true, 00:12:16.235 "num_base_bdevs": 4, 00:12:16.235 "num_base_bdevs_discovered": 4, 00:12:16.235 "num_base_bdevs_operational": 4, 00:12:16.235 "base_bdevs_list": [ 00:12:16.235 { 00:12:16.235 "name": "pt1", 00:12:16.235 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:16.235 "is_configured": true, 00:12:16.235 "data_offset": 2048, 00:12:16.235 "data_size": 63488 00:12:16.235 }, 00:12:16.235 { 00:12:16.235 "name": "pt2", 00:12:16.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.235 "is_configured": true, 00:12:16.235 "data_offset": 2048, 00:12:16.235 "data_size": 63488 00:12:16.235 }, 00:12:16.235 { 00:12:16.235 "name": "pt3", 00:12:16.235 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:16.235 "is_configured": true, 00:12:16.235 "data_offset": 2048, 00:12:16.235 "data_size": 63488 00:12:16.235 }, 00:12:16.235 { 00:12:16.235 "name": "pt4", 00:12:16.235 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:16.235 "is_configured": true, 00:12:16.235 "data_offset": 2048, 00:12:16.235 "data_size": 63488 00:12:16.235 } 00:12:16.235 ] 00:12:16.235 }' 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.235 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:16.806 09:49:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.806 [2024-11-27 09:49:17.732662] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:16.806 "name": "raid_bdev1", 00:12:16.806 "aliases": [ 00:12:16.806 "beb5415c-fb53-48ef-ad9e-d1df6861865b" 00:12:16.806 ], 00:12:16.806 "product_name": "Raid Volume", 00:12:16.806 "block_size": 512, 00:12:16.806 "num_blocks": 253952, 00:12:16.806 "uuid": "beb5415c-fb53-48ef-ad9e-d1df6861865b", 00:12:16.806 "assigned_rate_limits": { 00:12:16.806 "rw_ios_per_sec": 0, 00:12:16.806 "rw_mbytes_per_sec": 0, 00:12:16.806 "r_mbytes_per_sec": 0, 00:12:16.806 "w_mbytes_per_sec": 0 00:12:16.806 }, 00:12:16.806 "claimed": false, 00:12:16.806 "zoned": false, 00:12:16.806 "supported_io_types": { 00:12:16.806 "read": true, 00:12:16.806 "write": true, 00:12:16.806 "unmap": true, 00:12:16.806 "flush": true, 00:12:16.806 "reset": true, 00:12:16.806 "nvme_admin": false, 00:12:16.806 "nvme_io": false, 00:12:16.806 "nvme_io_md": false, 00:12:16.806 "write_zeroes": true, 00:12:16.806 "zcopy": false, 00:12:16.806 "get_zone_info": false, 00:12:16.806 "zone_management": false, 00:12:16.806 "zone_append": false, 00:12:16.806 "compare": false, 00:12:16.806 "compare_and_write": false, 00:12:16.806 "abort": false, 00:12:16.806 "seek_hole": false, 00:12:16.806 "seek_data": false, 00:12:16.806 "copy": false, 00:12:16.806 "nvme_iov_md": false 00:12:16.806 }, 00:12:16.806 
"memory_domains": [ 00:12:16.806 { 00:12:16.806 "dma_device_id": "system", 00:12:16.806 "dma_device_type": 1 00:12:16.806 }, 00:12:16.806 { 00:12:16.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.806 "dma_device_type": 2 00:12:16.806 }, 00:12:16.806 { 00:12:16.806 "dma_device_id": "system", 00:12:16.806 "dma_device_type": 1 00:12:16.806 }, 00:12:16.806 { 00:12:16.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.806 "dma_device_type": 2 00:12:16.806 }, 00:12:16.806 { 00:12:16.806 "dma_device_id": "system", 00:12:16.806 "dma_device_type": 1 00:12:16.806 }, 00:12:16.806 { 00:12:16.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.806 "dma_device_type": 2 00:12:16.806 }, 00:12:16.806 { 00:12:16.806 "dma_device_id": "system", 00:12:16.806 "dma_device_type": 1 00:12:16.806 }, 00:12:16.806 { 00:12:16.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.806 "dma_device_type": 2 00:12:16.806 } 00:12:16.806 ], 00:12:16.806 "driver_specific": { 00:12:16.806 "raid": { 00:12:16.806 "uuid": "beb5415c-fb53-48ef-ad9e-d1df6861865b", 00:12:16.806 "strip_size_kb": 64, 00:12:16.806 "state": "online", 00:12:16.806 "raid_level": "raid0", 00:12:16.806 "superblock": true, 00:12:16.806 "num_base_bdevs": 4, 00:12:16.806 "num_base_bdevs_discovered": 4, 00:12:16.806 "num_base_bdevs_operational": 4, 00:12:16.806 "base_bdevs_list": [ 00:12:16.806 { 00:12:16.806 "name": "pt1", 00:12:16.806 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:16.806 "is_configured": true, 00:12:16.806 "data_offset": 2048, 00:12:16.806 "data_size": 63488 00:12:16.806 }, 00:12:16.806 { 00:12:16.806 "name": "pt2", 00:12:16.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.806 "is_configured": true, 00:12:16.806 "data_offset": 2048, 00:12:16.806 "data_size": 63488 00:12:16.806 }, 00:12:16.806 { 00:12:16.806 "name": "pt3", 00:12:16.806 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:16.806 "is_configured": true, 00:12:16.806 "data_offset": 2048, 00:12:16.806 "data_size": 63488 
00:12:16.806 }, 00:12:16.806 { 00:12:16.806 "name": "pt4", 00:12:16.806 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:16.806 "is_configured": true, 00:12:16.806 "data_offset": 2048, 00:12:16.806 "data_size": 63488 00:12:16.806 } 00:12:16.806 ] 00:12:16.806 } 00:12:16.806 } 00:12:16.806 }' 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:16.806 pt2 00:12:16.806 pt3 00:12:16.806 pt4' 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.806 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.807 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.807 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:12:16.807 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.807 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.807 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.067 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.067 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.068 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.068 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.068 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:17.068 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.068 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.068 09:49:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.068 09:49:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:17.068 [2024-11-27 09:49:18.076121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' beb5415c-fb53-48ef-ad9e-d1df6861865b '!=' beb5415c-fb53-48ef-ad9e-d1df6861865b ']' 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70990 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70990 ']' 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70990 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70990 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70990' 00:12:17.068 killing process with pid 70990 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70990 00:12:17.068 [2024-11-27 09:49:18.165132] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:17.068 09:49:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70990 00:12:17.068 [2024-11-27 09:49:18.165311] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.068 [2024-11-27 09:49:18.165401] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.068 [2024-11-27 09:49:18.165459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:17.639 [2024-11-27 09:49:18.602312] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:19.030 09:49:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:19.030 00:12:19.030 real 0m5.717s 00:12:19.030 user 0m7.869s 00:12:19.030 sys 0m1.111s 00:12:19.030 09:49:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.030 09:49:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.030 ************************************ 00:12:19.030 END TEST raid_superblock_test 
00:12:19.030 ************************************ 00:12:19.030 09:49:19 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:12:19.030 09:49:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:19.030 09:49:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.030 09:49:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:19.030 ************************************ 00:12:19.030 START TEST raid_read_error_test 00:12:19.030 ************************************ 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5YrtmAzTqE 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71251 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71251 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71251 ']' 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.030 09:49:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.030 [2024-11-27 09:49:20.058957] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:12:19.030 [2024-11-27 09:49:20.059191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71251 ] 00:12:19.291 [2024-11-27 09:49:20.244817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.291 [2024-11-27 09:49:20.386968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.551 [2024-11-27 09:49:20.629717] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.551 [2024-11-27 09:49:20.629797] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.811 09:49:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.811 09:49:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:19.811 09:49:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:19.811 09:49:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:19.811 09:49:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.811 09:49:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.070 BaseBdev1_malloc 00:12:20.070 09:49:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.070 09:49:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:20.070 09:49:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.070 09:49:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.070 true 00:12:20.070 09:49:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:20.071 09:49:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:20.071 09:49:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.071 09:49:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.071 [2024-11-27 09:49:20.977720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:20.071 [2024-11-27 09:49:20.977788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.071 [2024-11-27 09:49:20.977812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:20.071 [2024-11-27 09:49:20.977825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.071 [2024-11-27 09:49:20.980356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.071 [2024-11-27 09:49:20.980465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:20.071 BaseBdev1 00:12:20.071 09:49:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.071 09:49:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:20.071 09:49:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:20.071 09:49:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.071 09:49:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.071 BaseBdev2_malloc 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.071 true 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.071 [2024-11-27 09:49:21.053844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:20.071 [2024-11-27 09:49:21.053909] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.071 [2024-11-27 09:49:21.053929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:20.071 [2024-11-27 09:49:21.053942] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.071 [2024-11-27 09:49:21.056611] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.071 [2024-11-27 09:49:21.056656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:20.071 BaseBdev2 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.071 BaseBdev3_malloc 00:12:20.071 09:49:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.071 true 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.071 [2024-11-27 09:49:21.140214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:20.071 [2024-11-27 09:49:21.140271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.071 [2024-11-27 09:49:21.140293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:20.071 [2024-11-27 09:49:21.140305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.071 [2024-11-27 09:49:21.142907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.071 [2024-11-27 09:49:21.142949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:20.071 BaseBdev3 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.071 BaseBdev4_malloc 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.071 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.332 true 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.332 [2024-11-27 09:49:21.215200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:20.332 [2024-11-27 09:49:21.215259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.332 [2024-11-27 09:49:21.215279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:20.332 [2024-11-27 09:49:21.215291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.332 [2024-11-27 09:49:21.217755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.332 [2024-11-27 09:49:21.217798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:20.332 BaseBdev4 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.332 [2024-11-27 09:49:21.227257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.332 [2024-11-27 09:49:21.229474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.332 [2024-11-27 09:49:21.229553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.332 [2024-11-27 09:49:21.229616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:20.332 [2024-11-27 09:49:21.229847] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:20.332 [2024-11-27 09:49:21.229863] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:20.332 [2024-11-27 09:49:21.230142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:20.332 [2024-11-27 09:49:21.230335] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:20.332 [2024-11-27 09:49:21.230353] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:20.332 [2024-11-27 09:49:21.230527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:20.332 09:49:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.332 "name": "raid_bdev1", 00:12:20.332 "uuid": "8c5f0c22-43e8-4698-a0dd-977f1730aac6", 00:12:20.332 "strip_size_kb": 64, 00:12:20.332 "state": "online", 00:12:20.332 "raid_level": "raid0", 00:12:20.332 "superblock": true, 00:12:20.332 "num_base_bdevs": 4, 00:12:20.332 "num_base_bdevs_discovered": 4, 00:12:20.332 "num_base_bdevs_operational": 4, 00:12:20.332 "base_bdevs_list": [ 00:12:20.332 
{ 00:12:20.332 "name": "BaseBdev1", 00:12:20.332 "uuid": "b792c09e-fd45-5ed0-928c-979c413e5226", 00:12:20.332 "is_configured": true, 00:12:20.332 "data_offset": 2048, 00:12:20.332 "data_size": 63488 00:12:20.332 }, 00:12:20.332 { 00:12:20.332 "name": "BaseBdev2", 00:12:20.332 "uuid": "e87192d1-8b1a-5522-b363-356400b14d60", 00:12:20.332 "is_configured": true, 00:12:20.332 "data_offset": 2048, 00:12:20.332 "data_size": 63488 00:12:20.332 }, 00:12:20.332 { 00:12:20.332 "name": "BaseBdev3", 00:12:20.332 "uuid": "f9b18b85-9bd5-5ec0-b449-05e163a32d14", 00:12:20.332 "is_configured": true, 00:12:20.332 "data_offset": 2048, 00:12:20.332 "data_size": 63488 00:12:20.332 }, 00:12:20.332 { 00:12:20.332 "name": "BaseBdev4", 00:12:20.332 "uuid": "5b0ddbbd-cc54-5cd7-953c-82bfd210a6c5", 00:12:20.332 "is_configured": true, 00:12:20.332 "data_offset": 2048, 00:12:20.332 "data_size": 63488 00:12:20.332 } 00:12:20.332 ] 00:12:20.332 }' 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.332 09:49:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.592 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:20.592 09:49:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:20.592 [2024-11-27 09:49:21.700050] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.537 09:49:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.537 09:49:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.798 09:49:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.798 "name": "raid_bdev1", 00:12:21.798 "uuid": "8c5f0c22-43e8-4698-a0dd-977f1730aac6", 00:12:21.798 "strip_size_kb": 64, 00:12:21.798 "state": "online", 00:12:21.798 "raid_level": "raid0", 00:12:21.798 "superblock": true, 00:12:21.798 "num_base_bdevs": 4, 00:12:21.798 "num_base_bdevs_discovered": 4, 00:12:21.798 "num_base_bdevs_operational": 4, 00:12:21.798 "base_bdevs_list": [ 00:12:21.798 { 00:12:21.798 "name": "BaseBdev1", 00:12:21.798 "uuid": "b792c09e-fd45-5ed0-928c-979c413e5226", 00:12:21.798 "is_configured": true, 00:12:21.798 "data_offset": 2048, 00:12:21.798 "data_size": 63488 00:12:21.798 }, 00:12:21.798 { 00:12:21.798 "name": "BaseBdev2", 00:12:21.798 "uuid": "e87192d1-8b1a-5522-b363-356400b14d60", 00:12:21.798 "is_configured": true, 00:12:21.798 "data_offset": 2048, 00:12:21.798 "data_size": 63488 00:12:21.798 }, 00:12:21.798 { 00:12:21.798 "name": "BaseBdev3", 00:12:21.798 "uuid": "f9b18b85-9bd5-5ec0-b449-05e163a32d14", 00:12:21.798 "is_configured": true, 00:12:21.798 "data_offset": 2048, 00:12:21.798 "data_size": 63488 00:12:21.798 }, 00:12:21.798 { 00:12:21.798 "name": "BaseBdev4", 00:12:21.798 "uuid": "5b0ddbbd-cc54-5cd7-953c-82bfd210a6c5", 00:12:21.798 "is_configured": true, 00:12:21.798 "data_offset": 2048, 00:12:21.798 "data_size": 63488 00:12:21.798 } 00:12:21.798 ] 00:12:21.798 }' 00:12:21.798 09:49:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.798 09:49:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.059 09:49:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:22.059 09:49:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.059 09:49:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.059 [2024-11-27 09:49:23.093515] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:22.059 [2024-11-27 09:49:23.093614] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:22.059 [2024-11-27 09:49:23.096615] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.059 [2024-11-27 09:49:23.096734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.059 [2024-11-27 09:49:23.096807] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.059 [2024-11-27 09:49:23.096857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:22.059 { 00:12:22.059 "results": [ 00:12:22.059 { 00:12:22.059 "job": "raid_bdev1", 00:12:22.059 "core_mask": "0x1", 00:12:22.059 "workload": "randrw", 00:12:22.059 "percentage": 50, 00:12:22.059 "status": "finished", 00:12:22.059 "queue_depth": 1, 00:12:22.059 "io_size": 131072, 00:12:22.059 "runtime": 1.394094, 00:12:22.059 "iops": 13009.88312122425, 00:12:22.059 "mibps": 1626.2353901530312, 00:12:22.059 "io_failed": 1, 00:12:22.059 "io_timeout": 0, 00:12:22.059 "avg_latency_us": 108.02141177705519, 00:12:22.059 "min_latency_us": 27.053275109170304, 00:12:22.059 "max_latency_us": 1488.1537117903931 00:12:22.059 } 00:12:22.059 ], 00:12:22.059 "core_count": 1 00:12:22.059 } 00:12:22.059 09:49:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.059 09:49:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71251 00:12:22.059 09:49:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71251 ']' 00:12:22.059 09:49:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71251 00:12:22.059 09:49:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:22.060 09:49:23 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.060 09:49:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71251 00:12:22.060 killing process with pid 71251 00:12:22.060 09:49:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:22.060 09:49:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:22.060 09:49:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71251' 00:12:22.060 09:49:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71251 00:12:22.060 [2024-11-27 09:49:23.141402] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:22.060 09:49:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71251 00:12:22.629 [2024-11-27 09:49:23.504547] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:24.072 09:49:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5YrtmAzTqE 00:12:24.072 09:49:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:24.072 09:49:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:24.072 09:49:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:12:24.072 09:49:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:24.072 09:49:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:24.072 09:49:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:24.072 09:49:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:12:24.072 00:12:24.072 real 0m4.892s 00:12:24.072 user 0m5.566s 00:12:24.072 sys 0m0.729s 00:12:24.072 09:49:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:24.072 09:49:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.072 ************************************ 00:12:24.072 END TEST raid_read_error_test 00:12:24.072 ************************************ 00:12:24.072 09:49:24 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:12:24.072 09:49:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:24.072 09:49:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.072 09:49:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:24.072 ************************************ 00:12:24.072 START TEST raid_write_error_test 00:12:24.072 ************************************ 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:24.072 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:24.073 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:24.073 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:24.073 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.j7Gwm8avOM 00:12:24.073 09:49:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71402 00:12:24.073 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:24.073 09:49:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71402 00:12:24.073 09:49:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71402 ']' 00:12:24.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.073 09:49:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.073 09:49:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.073 09:49:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.073 09:49:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.073 09:49:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.073 [2024-11-27 09:49:25.026453] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:12:24.073 [2024-11-27 09:49:25.026600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71402 ] 00:12:24.332 [2024-11-27 09:49:25.206456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.332 [2024-11-27 09:49:25.345513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.592 [2024-11-27 09:49:25.584471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.592 [2024-11-27 09:49:25.584518] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.853 09:49:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.853 09:49:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:24.853 09:49:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:24.853 09:49:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:24.853 09:49:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.853 09:49:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.853 BaseBdev1_malloc 00:12:24.853 09:49:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.853 09:49:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:24.853 09:49:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.853 09:49:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.853 true 00:12:24.853 09:49:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:24.853 09:49:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:24.853 09:49:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.853 09:49:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.853 [2024-11-27 09:49:25.937431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:24.853 [2024-11-27 09:49:25.937535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.853 [2024-11-27 09:49:25.937579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:24.853 [2024-11-27 09:49:25.937591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.853 [2024-11-27 09:49:25.939979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.853 [2024-11-27 09:49:25.940031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:24.853 BaseBdev1 00:12:24.853 09:49:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.853 09:49:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:24.853 09:49:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:24.853 09:49:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.853 09:49:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.114 BaseBdev2_malloc 00:12:25.114 09:49:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.114 09:49:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:25.114 09:49:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.114 09:49:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.114 true 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.114 [2024-11-27 09:49:26.009892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:25.114 [2024-11-27 09:49:26.009963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.114 [2024-11-27 09:49:26.009996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:25.114 [2024-11-27 09:49:26.010008] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.114 [2024-11-27 09:49:26.012378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.114 [2024-11-27 09:49:26.012419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:25.114 BaseBdev2 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:25.114 BaseBdev3_malloc 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.114 true 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.114 [2024-11-27 09:49:26.114815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:25.114 [2024-11-27 09:49:26.114873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.114 [2024-11-27 09:49:26.114891] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:25.114 [2024-11-27 09:49:26.114903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.114 [2024-11-27 09:49:26.117274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.114 [2024-11-27 09:49:26.117314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:25.114 BaseBdev3 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.114 BaseBdev4_malloc 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.114 true 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.114 [2024-11-27 09:49:26.188055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:25.114 [2024-11-27 09:49:26.188177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.114 [2024-11-27 09:49:26.188209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:25.114 [2024-11-27 09:49:26.188222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.114 [2024-11-27 09:49:26.190672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.114 [2024-11-27 09:49:26.190714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:25.114 BaseBdev4 
00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.114 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.115 [2024-11-27 09:49:26.200114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:25.115 [2024-11-27 09:49:26.202249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:25.115 [2024-11-27 09:49:26.202325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:25.115 [2024-11-27 09:49:26.202385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:25.115 [2024-11-27 09:49:26.202606] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:25.115 [2024-11-27 09:49:26.202622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:25.115 [2024-11-27 09:49:26.202866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:25.115 [2024-11-27 09:49:26.203051] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:25.115 [2024-11-27 09:49:26.203064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:25.115 [2024-11-27 09:49:26.203206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.115 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.115 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:12:25.115 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.115 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.115 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:25.115 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.115 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.115 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.115 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.115 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.115 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.115 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.115 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.115 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.115 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.115 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.375 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.375 "name": "raid_bdev1", 00:12:25.375 "uuid": "5acca7ef-2103-4dde-9ba7-74ca528060e8", 00:12:25.375 "strip_size_kb": 64, 00:12:25.375 "state": "online", 00:12:25.375 "raid_level": "raid0", 00:12:25.375 "superblock": true, 00:12:25.375 "num_base_bdevs": 4, 00:12:25.375 "num_base_bdevs_discovered": 4, 00:12:25.375 
"num_base_bdevs_operational": 4, 00:12:25.375 "base_bdevs_list": [ 00:12:25.375 { 00:12:25.375 "name": "BaseBdev1", 00:12:25.375 "uuid": "f0729893-bb19-5145-aa33-1942bb84f86a", 00:12:25.375 "is_configured": true, 00:12:25.375 "data_offset": 2048, 00:12:25.375 "data_size": 63488 00:12:25.375 }, 00:12:25.375 { 00:12:25.375 "name": "BaseBdev2", 00:12:25.375 "uuid": "f01bcaef-a681-59ee-85df-59b79b6ece64", 00:12:25.375 "is_configured": true, 00:12:25.375 "data_offset": 2048, 00:12:25.375 "data_size": 63488 00:12:25.375 }, 00:12:25.375 { 00:12:25.375 "name": "BaseBdev3", 00:12:25.375 "uuid": "7c8d1d6e-a07c-5ce8-9ecf-1d44ed0c786e", 00:12:25.375 "is_configured": true, 00:12:25.375 "data_offset": 2048, 00:12:25.375 "data_size": 63488 00:12:25.375 }, 00:12:25.375 { 00:12:25.375 "name": "BaseBdev4", 00:12:25.375 "uuid": "9c978172-9986-582f-8194-22e736a09053", 00:12:25.375 "is_configured": true, 00:12:25.375 "data_offset": 2048, 00:12:25.375 "data_size": 63488 00:12:25.375 } 00:12:25.375 ] 00:12:25.375 }' 00:12:25.375 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.375 09:49:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.635 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:25.635 09:49:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:25.635 [2024-11-27 09:49:26.696600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.577 09:49:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.577 "name": "raid_bdev1", 00:12:26.577 "uuid": "5acca7ef-2103-4dde-9ba7-74ca528060e8", 00:12:26.577 "strip_size_kb": 64, 00:12:26.577 "state": "online", 00:12:26.578 "raid_level": "raid0", 00:12:26.578 "superblock": true, 00:12:26.578 "num_base_bdevs": 4, 00:12:26.578 "num_base_bdevs_discovered": 4, 00:12:26.578 "num_base_bdevs_operational": 4, 00:12:26.578 "base_bdevs_list": [ 00:12:26.578 { 00:12:26.578 "name": "BaseBdev1", 00:12:26.578 "uuid": "f0729893-bb19-5145-aa33-1942bb84f86a", 00:12:26.578 "is_configured": true, 00:12:26.578 "data_offset": 2048, 00:12:26.578 "data_size": 63488 00:12:26.578 }, 00:12:26.578 { 00:12:26.578 "name": "BaseBdev2", 00:12:26.578 "uuid": "f01bcaef-a681-59ee-85df-59b79b6ece64", 00:12:26.578 "is_configured": true, 00:12:26.578 "data_offset": 2048, 00:12:26.578 "data_size": 63488 00:12:26.578 }, 00:12:26.578 { 00:12:26.578 "name": "BaseBdev3", 00:12:26.578 "uuid": "7c8d1d6e-a07c-5ce8-9ecf-1d44ed0c786e", 00:12:26.578 "is_configured": true, 00:12:26.578 "data_offset": 2048, 00:12:26.578 "data_size": 63488 00:12:26.578 }, 00:12:26.578 { 00:12:26.578 "name": "BaseBdev4", 00:12:26.578 "uuid": "9c978172-9986-582f-8194-22e736a09053", 00:12:26.578 "is_configured": true, 00:12:26.578 "data_offset": 2048, 00:12:26.578 "data_size": 63488 00:12:26.578 } 00:12:26.578 ] 00:12:26.578 }' 00:12:26.578 09:49:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.578 09:49:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.148 09:49:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:27.148 09:49:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.148 09:49:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:27.148 [2024-11-27 09:49:28.066320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:27.148 [2024-11-27 09:49:28.066424] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:27.148 [2024-11-27 09:49:28.069192] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.148 [2024-11-27 09:49:28.069258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.148 [2024-11-27 09:49:28.069305] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.149 [2024-11-27 09:49:28.069317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:27.149 09:49:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.149 { 00:12:27.149 "results": [ 00:12:27.149 { 00:12:27.149 "job": "raid_bdev1", 00:12:27.149 "core_mask": "0x1", 00:12:27.149 "workload": "randrw", 00:12:27.149 "percentage": 50, 00:12:27.149 "status": "finished", 00:12:27.149 "queue_depth": 1, 00:12:27.149 "io_size": 131072, 00:12:27.149 "runtime": 1.370203, 00:12:27.149 "iops": 13375.390361866088, 00:12:27.149 "mibps": 1671.923795233261, 00:12:27.149 "io_failed": 1, 00:12:27.149 "io_timeout": 0, 00:12:27.149 "avg_latency_us": 105.10631500898714, 00:12:27.149 "min_latency_us": 25.9353711790393, 00:12:27.149 "max_latency_us": 1402.2986899563318 00:12:27.149 } 00:12:27.149 ], 00:12:27.149 "core_count": 1 00:12:27.149 } 00:12:27.149 09:49:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71402 00:12:27.149 09:49:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71402 ']' 00:12:27.149 09:49:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71402 00:12:27.149 09:49:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:12:27.149 09:49:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.149 09:49:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71402 00:12:27.149 killing process with pid 71402 00:12:27.149 09:49:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.149 09:49:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.149 09:49:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71402' 00:12:27.149 09:49:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71402 00:12:27.149 [2024-11-27 09:49:28.116043] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:27.149 09:49:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71402 00:12:27.409 [2024-11-27 09:49:28.476215] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:28.791 09:49:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.j7Gwm8avOM 00:12:28.791 09:49:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:28.791 09:49:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:28.791 09:49:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:28.791 09:49:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:28.791 ************************************ 00:12:28.791 END TEST raid_write_error_test 00:12:28.791 ************************************ 00:12:28.791 09:49:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:28.791 09:49:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:28.791 09:49:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.73 != \0\.\0\0 ]] 00:12:28.791 00:12:28.791 real 0m4.859s 00:12:28.791 user 0m5.550s 00:12:28.791 sys 0m0.719s 00:12:28.791 09:49:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.791 09:49:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.791 09:49:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:28.791 09:49:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:12:28.791 09:49:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:28.791 09:49:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.791 09:49:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:28.791 ************************************ 00:12:28.791 START TEST raid_state_function_test 00:12:28.791 ************************************ 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71547 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71547' 00:12:28.791 Process raid pid: 71547 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71547 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71547 ']' 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.791 09:49:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.051 [2024-11-27 09:49:29.949070] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:12:29.051 [2024-11-27 09:49:29.949290] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.051 [2024-11-27 09:49:30.129165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.312 [2024-11-27 09:49:30.271639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.572 [2024-11-27 09:49:30.520107] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.572 [2024-11-27 09:49:30.520292] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.831 09:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.831 09:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:29.831 09:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:29.831 09:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.831 09:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.831 [2024-11-27 09:49:30.789441] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:29.831 [2024-11-27 09:49:30.789562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:29.831 [2024-11-27 09:49:30.789597] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:29.831 [2024-11-27 09:49:30.789623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:29.831 [2024-11-27 09:49:30.789643] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:29.831 [2024-11-27 09:49:30.789682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:29.832 [2024-11-27 09:49:30.789702] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:29.832 [2024-11-27 09:49:30.789747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:29.832 09:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.832 09:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:29.832 09:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.832 09:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.832 09:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:29.832 09:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.832 09:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.832 09:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.832 09:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.832 09:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.832 09:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.832 09:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.832 09:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.832 09:49:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.832 09:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.832 09:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.832 09:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.832 "name": "Existed_Raid", 00:12:29.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.832 "strip_size_kb": 64, 00:12:29.832 "state": "configuring", 00:12:29.832 "raid_level": "concat", 00:12:29.832 "superblock": false, 00:12:29.832 "num_base_bdevs": 4, 00:12:29.832 "num_base_bdevs_discovered": 0, 00:12:29.832 "num_base_bdevs_operational": 4, 00:12:29.832 "base_bdevs_list": [ 00:12:29.832 { 00:12:29.832 "name": "BaseBdev1", 00:12:29.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.832 "is_configured": false, 00:12:29.832 "data_offset": 0, 00:12:29.832 "data_size": 0 00:12:29.832 }, 00:12:29.832 { 00:12:29.832 "name": "BaseBdev2", 00:12:29.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.832 "is_configured": false, 00:12:29.832 "data_offset": 0, 00:12:29.832 "data_size": 0 00:12:29.832 }, 00:12:29.832 { 00:12:29.832 "name": "BaseBdev3", 00:12:29.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.832 "is_configured": false, 00:12:29.832 "data_offset": 0, 00:12:29.832 "data_size": 0 00:12:29.832 }, 00:12:29.832 { 00:12:29.832 "name": "BaseBdev4", 00:12:29.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.832 "is_configured": false, 00:12:29.832 "data_offset": 0, 00:12:29.832 "data_size": 0 00:12:29.832 } 00:12:29.832 ] 00:12:29.832 }' 00:12:29.832 09:49:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.832 09:49:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.092 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:30.092 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.092 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.092 [2024-11-27 09:49:31.180724] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:30.092 [2024-11-27 09:49:31.180852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:30.092 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.092 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:30.092 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.092 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.092 [2024-11-27 09:49:31.192665] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:30.092 [2024-11-27 09:49:31.192769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:30.092 [2024-11-27 09:49:31.192802] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:30.092 [2024-11-27 09:49:31.192827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:30.092 [2024-11-27 09:49:31.192846] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:30.092 [2024-11-27 09:49:31.192867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:30.092 [2024-11-27 09:49:31.192891] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:30.092 [2024-11-27 09:49:31.192915] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:30.092 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.092 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:30.092 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.092 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.352 [2024-11-27 09:49:31.247424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:30.352 BaseBdev1 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.352 [ 00:12:30.352 { 00:12:30.352 "name": "BaseBdev1", 00:12:30.352 "aliases": [ 00:12:30.352 "2f2c0927-5670-4d2e-b4c8-52cbcf9e2fcd" 00:12:30.352 ], 00:12:30.352 "product_name": "Malloc disk", 00:12:30.352 "block_size": 512, 00:12:30.352 "num_blocks": 65536, 00:12:30.352 "uuid": "2f2c0927-5670-4d2e-b4c8-52cbcf9e2fcd", 00:12:30.352 "assigned_rate_limits": { 00:12:30.352 "rw_ios_per_sec": 0, 00:12:30.352 "rw_mbytes_per_sec": 0, 00:12:30.352 "r_mbytes_per_sec": 0, 00:12:30.352 "w_mbytes_per_sec": 0 00:12:30.352 }, 00:12:30.352 "claimed": true, 00:12:30.352 "claim_type": "exclusive_write", 00:12:30.352 "zoned": false, 00:12:30.352 "supported_io_types": { 00:12:30.352 "read": true, 00:12:30.352 "write": true, 00:12:30.352 "unmap": true, 00:12:30.352 "flush": true, 00:12:30.352 "reset": true, 00:12:30.352 "nvme_admin": false, 00:12:30.352 "nvme_io": false, 00:12:30.352 "nvme_io_md": false, 00:12:30.352 "write_zeroes": true, 00:12:30.352 "zcopy": true, 00:12:30.352 "get_zone_info": false, 00:12:30.352 "zone_management": false, 00:12:30.352 "zone_append": false, 00:12:30.352 "compare": false, 00:12:30.352 "compare_and_write": false, 00:12:30.352 "abort": true, 00:12:30.352 "seek_hole": false, 00:12:30.352 "seek_data": false, 00:12:30.352 "copy": true, 00:12:30.352 "nvme_iov_md": false 00:12:30.352 }, 00:12:30.352 "memory_domains": [ 00:12:30.352 { 00:12:30.352 "dma_device_id": "system", 00:12:30.352 "dma_device_type": 1 00:12:30.352 }, 00:12:30.352 { 00:12:30.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:30.352 "dma_device_type": 2 00:12:30.352 } 00:12:30.352 ], 00:12:30.352 "driver_specific": {} 00:12:30.352 } 00:12:30.352 ] 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.352 "name": "Existed_Raid", 
00:12:30.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.352 "strip_size_kb": 64, 00:12:30.352 "state": "configuring", 00:12:30.352 "raid_level": "concat", 00:12:30.352 "superblock": false, 00:12:30.352 "num_base_bdevs": 4, 00:12:30.352 "num_base_bdevs_discovered": 1, 00:12:30.352 "num_base_bdevs_operational": 4, 00:12:30.352 "base_bdevs_list": [ 00:12:30.352 { 00:12:30.352 "name": "BaseBdev1", 00:12:30.352 "uuid": "2f2c0927-5670-4d2e-b4c8-52cbcf9e2fcd", 00:12:30.352 "is_configured": true, 00:12:30.352 "data_offset": 0, 00:12:30.352 "data_size": 65536 00:12:30.352 }, 00:12:30.352 { 00:12:30.352 "name": "BaseBdev2", 00:12:30.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.352 "is_configured": false, 00:12:30.352 "data_offset": 0, 00:12:30.352 "data_size": 0 00:12:30.352 }, 00:12:30.352 { 00:12:30.352 "name": "BaseBdev3", 00:12:30.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.352 "is_configured": false, 00:12:30.352 "data_offset": 0, 00:12:30.352 "data_size": 0 00:12:30.352 }, 00:12:30.352 { 00:12:30.352 "name": "BaseBdev4", 00:12:30.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.352 "is_configured": false, 00:12:30.352 "data_offset": 0, 00:12:30.352 "data_size": 0 00:12:30.352 } 00:12:30.352 ] 00:12:30.352 }' 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.352 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.612 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:30.612 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.612 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.612 [2024-11-27 09:49:31.738642] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:30.612 [2024-11-27 09:49:31.738711] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:30.612 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.873 [2024-11-27 09:49:31.750673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:30.873 [2024-11-27 09:49:31.752866] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:30.873 [2024-11-27 09:49:31.752913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:30.873 [2024-11-27 09:49:31.752923] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:30.873 [2024-11-27 09:49:31.752933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:30.873 [2024-11-27 09:49:31.752940] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:30.873 [2024-11-27 09:49:31.752949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.873 "name": "Existed_Raid", 00:12:30.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.873 "strip_size_kb": 64, 00:12:30.873 "state": "configuring", 00:12:30.873 "raid_level": "concat", 00:12:30.873 "superblock": false, 00:12:30.873 "num_base_bdevs": 4, 00:12:30.873 
"num_base_bdevs_discovered": 1, 00:12:30.873 "num_base_bdevs_operational": 4, 00:12:30.873 "base_bdevs_list": [ 00:12:30.873 { 00:12:30.873 "name": "BaseBdev1", 00:12:30.873 "uuid": "2f2c0927-5670-4d2e-b4c8-52cbcf9e2fcd", 00:12:30.873 "is_configured": true, 00:12:30.873 "data_offset": 0, 00:12:30.873 "data_size": 65536 00:12:30.873 }, 00:12:30.873 { 00:12:30.873 "name": "BaseBdev2", 00:12:30.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.873 "is_configured": false, 00:12:30.873 "data_offset": 0, 00:12:30.873 "data_size": 0 00:12:30.873 }, 00:12:30.873 { 00:12:30.873 "name": "BaseBdev3", 00:12:30.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.873 "is_configured": false, 00:12:30.873 "data_offset": 0, 00:12:30.873 "data_size": 0 00:12:30.873 }, 00:12:30.873 { 00:12:30.873 "name": "BaseBdev4", 00:12:30.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.873 "is_configured": false, 00:12:30.873 "data_offset": 0, 00:12:30.873 "data_size": 0 00:12:30.873 } 00:12:30.873 ] 00:12:30.873 }' 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.873 09:49:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.133 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:31.133 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.133 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.393 [2024-11-27 09:49:32.273903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:31.393 BaseBdev2 00:12:31.393 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.393 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:31.393 09:49:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:31.393 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:31.393 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.394 [ 00:12:31.394 { 00:12:31.394 "name": "BaseBdev2", 00:12:31.394 "aliases": [ 00:12:31.394 "675b95f6-45fe-45c4-9ffd-fd94f6c200ae" 00:12:31.394 ], 00:12:31.394 "product_name": "Malloc disk", 00:12:31.394 "block_size": 512, 00:12:31.394 "num_blocks": 65536, 00:12:31.394 "uuid": "675b95f6-45fe-45c4-9ffd-fd94f6c200ae", 00:12:31.394 "assigned_rate_limits": { 00:12:31.394 "rw_ios_per_sec": 0, 00:12:31.394 "rw_mbytes_per_sec": 0, 00:12:31.394 "r_mbytes_per_sec": 0, 00:12:31.394 "w_mbytes_per_sec": 0 00:12:31.394 }, 00:12:31.394 "claimed": true, 00:12:31.394 "claim_type": "exclusive_write", 00:12:31.394 "zoned": false, 00:12:31.394 "supported_io_types": { 
00:12:31.394 "read": true, 00:12:31.394 "write": true, 00:12:31.394 "unmap": true, 00:12:31.394 "flush": true, 00:12:31.394 "reset": true, 00:12:31.394 "nvme_admin": false, 00:12:31.394 "nvme_io": false, 00:12:31.394 "nvme_io_md": false, 00:12:31.394 "write_zeroes": true, 00:12:31.394 "zcopy": true, 00:12:31.394 "get_zone_info": false, 00:12:31.394 "zone_management": false, 00:12:31.394 "zone_append": false, 00:12:31.394 "compare": false, 00:12:31.394 "compare_and_write": false, 00:12:31.394 "abort": true, 00:12:31.394 "seek_hole": false, 00:12:31.394 "seek_data": false, 00:12:31.394 "copy": true, 00:12:31.394 "nvme_iov_md": false 00:12:31.394 }, 00:12:31.394 "memory_domains": [ 00:12:31.394 { 00:12:31.394 "dma_device_id": "system", 00:12:31.394 "dma_device_type": 1 00:12:31.394 }, 00:12:31.394 { 00:12:31.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.394 "dma_device_type": 2 00:12:31.394 } 00:12:31.394 ], 00:12:31.394 "driver_specific": {} 00:12:31.394 } 00:12:31.394 ] 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.394 "name": "Existed_Raid", 00:12:31.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.394 "strip_size_kb": 64, 00:12:31.394 "state": "configuring", 00:12:31.394 "raid_level": "concat", 00:12:31.394 "superblock": false, 00:12:31.394 "num_base_bdevs": 4, 00:12:31.394 "num_base_bdevs_discovered": 2, 00:12:31.394 "num_base_bdevs_operational": 4, 00:12:31.394 "base_bdevs_list": [ 00:12:31.394 { 00:12:31.394 "name": "BaseBdev1", 00:12:31.394 "uuid": "2f2c0927-5670-4d2e-b4c8-52cbcf9e2fcd", 00:12:31.394 "is_configured": true, 00:12:31.394 "data_offset": 0, 00:12:31.394 "data_size": 65536 00:12:31.394 }, 00:12:31.394 { 00:12:31.394 "name": "BaseBdev2", 00:12:31.394 "uuid": "675b95f6-45fe-45c4-9ffd-fd94f6c200ae", 00:12:31.394 
"is_configured": true, 00:12:31.394 "data_offset": 0, 00:12:31.394 "data_size": 65536 00:12:31.394 }, 00:12:31.394 { 00:12:31.394 "name": "BaseBdev3", 00:12:31.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.394 "is_configured": false, 00:12:31.394 "data_offset": 0, 00:12:31.394 "data_size": 0 00:12:31.394 }, 00:12:31.394 { 00:12:31.394 "name": "BaseBdev4", 00:12:31.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.394 "is_configured": false, 00:12:31.394 "data_offset": 0, 00:12:31.394 "data_size": 0 00:12:31.394 } 00:12:31.394 ] 00:12:31.394 }' 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.394 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.654 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:31.654 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.654 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.654 [2024-11-27 09:49:32.784522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:31.654 BaseBdev3 00:12:31.919 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.919 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:31.919 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:31.919 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:31.919 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:31.919 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:31.919 09:49:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:31.919 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:31.919 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.919 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.919 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.919 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:31.919 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.919 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.919 [ 00:12:31.919 { 00:12:31.919 "name": "BaseBdev3", 00:12:31.919 "aliases": [ 00:12:31.919 "d4744055-12a4-41dd-8943-7d3b3835c18c" 00:12:31.919 ], 00:12:31.919 "product_name": "Malloc disk", 00:12:31.919 "block_size": 512, 00:12:31.919 "num_blocks": 65536, 00:12:31.919 "uuid": "d4744055-12a4-41dd-8943-7d3b3835c18c", 00:12:31.919 "assigned_rate_limits": { 00:12:31.919 "rw_ios_per_sec": 0, 00:12:31.919 "rw_mbytes_per_sec": 0, 00:12:31.919 "r_mbytes_per_sec": 0, 00:12:31.920 "w_mbytes_per_sec": 0 00:12:31.920 }, 00:12:31.920 "claimed": true, 00:12:31.920 "claim_type": "exclusive_write", 00:12:31.920 "zoned": false, 00:12:31.920 "supported_io_types": { 00:12:31.920 "read": true, 00:12:31.920 "write": true, 00:12:31.920 "unmap": true, 00:12:31.920 "flush": true, 00:12:31.920 "reset": true, 00:12:31.920 "nvme_admin": false, 00:12:31.920 "nvme_io": false, 00:12:31.920 "nvme_io_md": false, 00:12:31.920 "write_zeroes": true, 00:12:31.920 "zcopy": true, 00:12:31.920 "get_zone_info": false, 00:12:31.920 "zone_management": false, 00:12:31.920 "zone_append": false, 00:12:31.920 "compare": false, 00:12:31.920 "compare_and_write": false, 
00:12:31.920 "abort": true, 00:12:31.920 "seek_hole": false, 00:12:31.920 "seek_data": false, 00:12:31.920 "copy": true, 00:12:31.920 "nvme_iov_md": false 00:12:31.920 }, 00:12:31.920 "memory_domains": [ 00:12:31.920 { 00:12:31.920 "dma_device_id": "system", 00:12:31.920 "dma_device_type": 1 00:12:31.920 }, 00:12:31.920 { 00:12:31.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.920 "dma_device_type": 2 00:12:31.920 } 00:12:31.920 ], 00:12:31.920 "driver_specific": {} 00:12:31.920 } 00:12:31.920 ] 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.920 "name": "Existed_Raid", 00:12:31.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.920 "strip_size_kb": 64, 00:12:31.920 "state": "configuring", 00:12:31.920 "raid_level": "concat", 00:12:31.920 "superblock": false, 00:12:31.920 "num_base_bdevs": 4, 00:12:31.920 "num_base_bdevs_discovered": 3, 00:12:31.920 "num_base_bdevs_operational": 4, 00:12:31.920 "base_bdevs_list": [ 00:12:31.920 { 00:12:31.920 "name": "BaseBdev1", 00:12:31.920 "uuid": "2f2c0927-5670-4d2e-b4c8-52cbcf9e2fcd", 00:12:31.920 "is_configured": true, 00:12:31.920 "data_offset": 0, 00:12:31.920 "data_size": 65536 00:12:31.920 }, 00:12:31.920 { 00:12:31.920 "name": "BaseBdev2", 00:12:31.920 "uuid": "675b95f6-45fe-45c4-9ffd-fd94f6c200ae", 00:12:31.920 "is_configured": true, 00:12:31.920 "data_offset": 0, 00:12:31.920 "data_size": 65536 00:12:31.920 }, 00:12:31.920 { 00:12:31.920 "name": "BaseBdev3", 00:12:31.920 "uuid": "d4744055-12a4-41dd-8943-7d3b3835c18c", 00:12:31.920 "is_configured": true, 00:12:31.920 "data_offset": 0, 00:12:31.920 "data_size": 65536 00:12:31.920 }, 00:12:31.920 { 00:12:31.920 "name": "BaseBdev4", 00:12:31.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.920 "is_configured": false, 
00:12:31.920 "data_offset": 0, 00:12:31.920 "data_size": 0 00:12:31.920 } 00:12:31.920 ] 00:12:31.920 }' 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.920 09:49:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.186 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:32.186 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.186 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.446 [2024-11-27 09:49:33.355617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:32.446 [2024-11-27 09:49:33.355681] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:32.446 [2024-11-27 09:49:33.355689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:32.446 [2024-11-27 09:49:33.355989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:32.446 [2024-11-27 09:49:33.356232] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:32.446 [2024-11-27 09:49:33.356248] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:32.446 [2024-11-27 09:49:33.356542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.446 BaseBdev4 00:12:32.446 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.446 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:32.446 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:32.446 09:49:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:32.446 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:32.446 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:32.446 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:32.446 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:32.446 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.446 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.446 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.446 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:32.446 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.446 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.446 [ 00:12:32.446 { 00:12:32.446 "name": "BaseBdev4", 00:12:32.446 "aliases": [ 00:12:32.446 "93f9f616-30ca-4c87-a3ed-df52557f12c7" 00:12:32.446 ], 00:12:32.446 "product_name": "Malloc disk", 00:12:32.446 "block_size": 512, 00:12:32.446 "num_blocks": 65536, 00:12:32.446 "uuid": "93f9f616-30ca-4c87-a3ed-df52557f12c7", 00:12:32.446 "assigned_rate_limits": { 00:12:32.446 "rw_ios_per_sec": 0, 00:12:32.447 "rw_mbytes_per_sec": 0, 00:12:32.447 "r_mbytes_per_sec": 0, 00:12:32.447 "w_mbytes_per_sec": 0 00:12:32.447 }, 00:12:32.447 "claimed": true, 00:12:32.447 "claim_type": "exclusive_write", 00:12:32.447 "zoned": false, 00:12:32.447 "supported_io_types": { 00:12:32.447 "read": true, 00:12:32.447 "write": true, 00:12:32.447 "unmap": true, 00:12:32.447 "flush": true, 00:12:32.447 "reset": true, 00:12:32.447 
"nvme_admin": false, 00:12:32.447 "nvme_io": false, 00:12:32.447 "nvme_io_md": false, 00:12:32.447 "write_zeroes": true, 00:12:32.447 "zcopy": true, 00:12:32.447 "get_zone_info": false, 00:12:32.447 "zone_management": false, 00:12:32.447 "zone_append": false, 00:12:32.447 "compare": false, 00:12:32.447 "compare_and_write": false, 00:12:32.447 "abort": true, 00:12:32.447 "seek_hole": false, 00:12:32.447 "seek_data": false, 00:12:32.447 "copy": true, 00:12:32.447 "nvme_iov_md": false 00:12:32.447 }, 00:12:32.447 "memory_domains": [ 00:12:32.447 { 00:12:32.447 "dma_device_id": "system", 00:12:32.447 "dma_device_type": 1 00:12:32.447 }, 00:12:32.447 { 00:12:32.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.447 "dma_device_type": 2 00:12:32.447 } 00:12:32.447 ], 00:12:32.447 "driver_specific": {} 00:12:32.447 } 00:12:32.447 ] 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.447 
09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.447 "name": "Existed_Raid", 00:12:32.447 "uuid": "d7b38fdd-3e42-4d1b-81f5-2c76ab928606", 00:12:32.447 "strip_size_kb": 64, 00:12:32.447 "state": "online", 00:12:32.447 "raid_level": "concat", 00:12:32.447 "superblock": false, 00:12:32.447 "num_base_bdevs": 4, 00:12:32.447 "num_base_bdevs_discovered": 4, 00:12:32.447 "num_base_bdevs_operational": 4, 00:12:32.447 "base_bdevs_list": [ 00:12:32.447 { 00:12:32.447 "name": "BaseBdev1", 00:12:32.447 "uuid": "2f2c0927-5670-4d2e-b4c8-52cbcf9e2fcd", 00:12:32.447 "is_configured": true, 00:12:32.447 "data_offset": 0, 00:12:32.447 "data_size": 65536 00:12:32.447 }, 00:12:32.447 { 00:12:32.447 "name": "BaseBdev2", 00:12:32.447 "uuid": "675b95f6-45fe-45c4-9ffd-fd94f6c200ae", 00:12:32.447 "is_configured": true, 00:12:32.447 "data_offset": 0, 00:12:32.447 "data_size": 65536 00:12:32.447 }, 00:12:32.447 { 00:12:32.447 "name": "BaseBdev3", 
00:12:32.447 "uuid": "d4744055-12a4-41dd-8943-7d3b3835c18c", 00:12:32.447 "is_configured": true, 00:12:32.447 "data_offset": 0, 00:12:32.447 "data_size": 65536 00:12:32.447 }, 00:12:32.447 { 00:12:32.447 "name": "BaseBdev4", 00:12:32.447 "uuid": "93f9f616-30ca-4c87-a3ed-df52557f12c7", 00:12:32.447 "is_configured": true, 00:12:32.447 "data_offset": 0, 00:12:32.447 "data_size": 65536 00:12:32.447 } 00:12:32.447 ] 00:12:32.447 }' 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.447 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.016 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:33.016 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:33.016 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:33.016 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:33.016 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:33.017 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:33.017 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:33.017 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.017 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.017 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:33.017 [2024-11-27 09:49:33.847223] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:33.017 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.017 
09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:33.017 "name": "Existed_Raid", 00:12:33.017 "aliases": [ 00:12:33.017 "d7b38fdd-3e42-4d1b-81f5-2c76ab928606" 00:12:33.017 ], 00:12:33.017 "product_name": "Raid Volume", 00:12:33.017 "block_size": 512, 00:12:33.017 "num_blocks": 262144, 00:12:33.017 "uuid": "d7b38fdd-3e42-4d1b-81f5-2c76ab928606", 00:12:33.017 "assigned_rate_limits": { 00:12:33.017 "rw_ios_per_sec": 0, 00:12:33.017 "rw_mbytes_per_sec": 0, 00:12:33.017 "r_mbytes_per_sec": 0, 00:12:33.017 "w_mbytes_per_sec": 0 00:12:33.017 }, 00:12:33.017 "claimed": false, 00:12:33.017 "zoned": false, 00:12:33.017 "supported_io_types": { 00:12:33.017 "read": true, 00:12:33.017 "write": true, 00:12:33.017 "unmap": true, 00:12:33.017 "flush": true, 00:12:33.017 "reset": true, 00:12:33.017 "nvme_admin": false, 00:12:33.017 "nvme_io": false, 00:12:33.017 "nvme_io_md": false, 00:12:33.017 "write_zeroes": true, 00:12:33.017 "zcopy": false, 00:12:33.017 "get_zone_info": false, 00:12:33.017 "zone_management": false, 00:12:33.017 "zone_append": false, 00:12:33.017 "compare": false, 00:12:33.017 "compare_and_write": false, 00:12:33.017 "abort": false, 00:12:33.017 "seek_hole": false, 00:12:33.017 "seek_data": false, 00:12:33.017 "copy": false, 00:12:33.017 "nvme_iov_md": false 00:12:33.017 }, 00:12:33.017 "memory_domains": [ 00:12:33.017 { 00:12:33.017 "dma_device_id": "system", 00:12:33.017 "dma_device_type": 1 00:12:33.017 }, 00:12:33.017 { 00:12:33.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.017 "dma_device_type": 2 00:12:33.017 }, 00:12:33.017 { 00:12:33.017 "dma_device_id": "system", 00:12:33.017 "dma_device_type": 1 00:12:33.017 }, 00:12:33.017 { 00:12:33.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.017 "dma_device_type": 2 00:12:33.017 }, 00:12:33.017 { 00:12:33.017 "dma_device_id": "system", 00:12:33.017 "dma_device_type": 1 00:12:33.017 }, 00:12:33.017 { 00:12:33.017 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:33.017 "dma_device_type": 2 00:12:33.017 }, 00:12:33.017 { 00:12:33.017 "dma_device_id": "system", 00:12:33.017 "dma_device_type": 1 00:12:33.017 }, 00:12:33.017 { 00:12:33.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.017 "dma_device_type": 2 00:12:33.017 } 00:12:33.017 ], 00:12:33.017 "driver_specific": { 00:12:33.017 "raid": { 00:12:33.017 "uuid": "d7b38fdd-3e42-4d1b-81f5-2c76ab928606", 00:12:33.017 "strip_size_kb": 64, 00:12:33.017 "state": "online", 00:12:33.017 "raid_level": "concat", 00:12:33.017 "superblock": false, 00:12:33.017 "num_base_bdevs": 4, 00:12:33.017 "num_base_bdevs_discovered": 4, 00:12:33.017 "num_base_bdevs_operational": 4, 00:12:33.017 "base_bdevs_list": [ 00:12:33.017 { 00:12:33.017 "name": "BaseBdev1", 00:12:33.017 "uuid": "2f2c0927-5670-4d2e-b4c8-52cbcf9e2fcd", 00:12:33.017 "is_configured": true, 00:12:33.017 "data_offset": 0, 00:12:33.017 "data_size": 65536 00:12:33.017 }, 00:12:33.017 { 00:12:33.017 "name": "BaseBdev2", 00:12:33.017 "uuid": "675b95f6-45fe-45c4-9ffd-fd94f6c200ae", 00:12:33.017 "is_configured": true, 00:12:33.017 "data_offset": 0, 00:12:33.017 "data_size": 65536 00:12:33.017 }, 00:12:33.017 { 00:12:33.017 "name": "BaseBdev3", 00:12:33.017 "uuid": "d4744055-12a4-41dd-8943-7d3b3835c18c", 00:12:33.017 "is_configured": true, 00:12:33.017 "data_offset": 0, 00:12:33.017 "data_size": 65536 00:12:33.017 }, 00:12:33.017 { 00:12:33.017 "name": "BaseBdev4", 00:12:33.017 "uuid": "93f9f616-30ca-4c87-a3ed-df52557f12c7", 00:12:33.017 "is_configured": true, 00:12:33.017 "data_offset": 0, 00:12:33.017 "data_size": 65536 00:12:33.017 } 00:12:33.017 ] 00:12:33.017 } 00:12:33.017 } 00:12:33.017 }' 00:12:33.017 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:33.017 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:33.017 BaseBdev2 
00:12:33.017 BaseBdev3 00:12:33.017 BaseBdev4' 00:12:33.017 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.017 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:33.017 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:33.017 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.017 09:49:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:33.017 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.017 09:49:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.017 09:49:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:33.017 09:49:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.017 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.017 [2024-11-27 09:49:34.130419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:33.017 [2024-11-27 09:49:34.130456] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:33.017 [2024-11-27 09:49:34.130515] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.277 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.277 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:33.277 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:33.277 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:33.277 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:33.277 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:33.277 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:33.277 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:33.278 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:33.278 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:33.278 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:33.278 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:33.278 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.278 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.278 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.278 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.278 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.278 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.278 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.278 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.278 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.278 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.278 "name": "Existed_Raid", 00:12:33.278 "uuid": "d7b38fdd-3e42-4d1b-81f5-2c76ab928606", 00:12:33.278 "strip_size_kb": 64, 00:12:33.278 "state": "offline", 00:12:33.278 "raid_level": "concat", 00:12:33.278 "superblock": false, 00:12:33.278 "num_base_bdevs": 4, 00:12:33.278 "num_base_bdevs_discovered": 3, 00:12:33.278 "num_base_bdevs_operational": 3, 00:12:33.278 "base_bdevs_list": [ 00:12:33.278 { 00:12:33.278 "name": null, 00:12:33.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.278 "is_configured": false, 00:12:33.278 "data_offset": 0, 00:12:33.278 "data_size": 65536 00:12:33.278 }, 00:12:33.278 { 00:12:33.278 "name": "BaseBdev2", 00:12:33.278 "uuid": "675b95f6-45fe-45c4-9ffd-fd94f6c200ae", 00:12:33.278 "is_configured": 
true, 00:12:33.278 "data_offset": 0, 00:12:33.278 "data_size": 65536 00:12:33.278 }, 00:12:33.278 { 00:12:33.278 "name": "BaseBdev3", 00:12:33.278 "uuid": "d4744055-12a4-41dd-8943-7d3b3835c18c", 00:12:33.278 "is_configured": true, 00:12:33.278 "data_offset": 0, 00:12:33.278 "data_size": 65536 00:12:33.278 }, 00:12:33.278 { 00:12:33.278 "name": "BaseBdev4", 00:12:33.278 "uuid": "93f9f616-30ca-4c87-a3ed-df52557f12c7", 00:12:33.278 "is_configured": true, 00:12:33.278 "data_offset": 0, 00:12:33.278 "data_size": 65536 00:12:33.278 } 00:12:33.278 ] 00:12:33.278 }' 00:12:33.278 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.278 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.536 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:33.536 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:33.536 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.536 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.536 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.536 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:33.536 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.795 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:33.795 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:33.795 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:33.795 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:33.795 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.795 [2024-11-27 09:49:34.704163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:33.795 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.795 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:33.795 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:33.795 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.795 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:33.795 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.795 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.795 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.795 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:33.795 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:33.795 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:33.795 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.795 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.795 [2024-11-27 09:49:34.863349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:34.055 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.055 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:34.055 09:49:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:34.055 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:34.055 09:49:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.055 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.055 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.055 09:49:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.055 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:34.055 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:34.055 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:34.055 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.055 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.055 [2024-11-27 09:49:35.016038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:34.055 [2024-11-27 09:49:35.016100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:34.055 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.055 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:34.055 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:34.055 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.055 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:12:34.055 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.055 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.056 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.056 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:34.056 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:34.056 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:34.056 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:34.056 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:34.056 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:34.056 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.056 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.316 BaseBdev2 00:12:34.316 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.316 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:34.316 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:34.316 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:34.316 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.317 [ 00:12:34.317 { 00:12:34.317 "name": "BaseBdev2", 00:12:34.317 "aliases": [ 00:12:34.317 "ae4a1257-8790-4871-b861-da1165a2ee12" 00:12:34.317 ], 00:12:34.317 "product_name": "Malloc disk", 00:12:34.317 "block_size": 512, 00:12:34.317 "num_blocks": 65536, 00:12:34.317 "uuid": "ae4a1257-8790-4871-b861-da1165a2ee12", 00:12:34.317 "assigned_rate_limits": { 00:12:34.317 "rw_ios_per_sec": 0, 00:12:34.317 "rw_mbytes_per_sec": 0, 00:12:34.317 "r_mbytes_per_sec": 0, 00:12:34.317 "w_mbytes_per_sec": 0 00:12:34.317 }, 00:12:34.317 "claimed": false, 00:12:34.317 "zoned": false, 00:12:34.317 "supported_io_types": { 00:12:34.317 "read": true, 00:12:34.317 "write": true, 00:12:34.317 "unmap": true, 00:12:34.317 "flush": true, 00:12:34.317 "reset": true, 00:12:34.317 "nvme_admin": false, 00:12:34.317 "nvme_io": false, 00:12:34.317 "nvme_io_md": false, 00:12:34.317 "write_zeroes": true, 00:12:34.317 "zcopy": true, 00:12:34.317 "get_zone_info": false, 00:12:34.317 "zone_management": false, 00:12:34.317 "zone_append": false, 00:12:34.317 "compare": false, 00:12:34.317 "compare_and_write": false, 00:12:34.317 "abort": true, 00:12:34.317 "seek_hole": false, 00:12:34.317 
"seek_data": false, 00:12:34.317 "copy": true, 00:12:34.317 "nvme_iov_md": false 00:12:34.317 }, 00:12:34.317 "memory_domains": [ 00:12:34.317 { 00:12:34.317 "dma_device_id": "system", 00:12:34.317 "dma_device_type": 1 00:12:34.317 }, 00:12:34.317 { 00:12:34.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.317 "dma_device_type": 2 00:12:34.317 } 00:12:34.317 ], 00:12:34.317 "driver_specific": {} 00:12:34.317 } 00:12:34.317 ] 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.317 BaseBdev3 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.317 [ 00:12:34.317 { 00:12:34.317 "name": "BaseBdev3", 00:12:34.317 "aliases": [ 00:12:34.317 "7438c3a7-9580-4a87-a157-92824f7d6f79" 00:12:34.317 ], 00:12:34.317 "product_name": "Malloc disk", 00:12:34.317 "block_size": 512, 00:12:34.317 "num_blocks": 65536, 00:12:34.317 "uuid": "7438c3a7-9580-4a87-a157-92824f7d6f79", 00:12:34.317 "assigned_rate_limits": { 00:12:34.317 "rw_ios_per_sec": 0, 00:12:34.317 "rw_mbytes_per_sec": 0, 00:12:34.317 "r_mbytes_per_sec": 0, 00:12:34.317 "w_mbytes_per_sec": 0 00:12:34.317 }, 00:12:34.317 "claimed": false, 00:12:34.317 "zoned": false, 00:12:34.317 "supported_io_types": { 00:12:34.317 "read": true, 00:12:34.317 "write": true, 00:12:34.317 "unmap": true, 00:12:34.317 "flush": true, 00:12:34.317 "reset": true, 00:12:34.317 "nvme_admin": false, 00:12:34.317 "nvme_io": false, 00:12:34.317 "nvme_io_md": false, 00:12:34.317 "write_zeroes": true, 00:12:34.317 "zcopy": true, 00:12:34.317 "get_zone_info": false, 00:12:34.317 "zone_management": false, 00:12:34.317 "zone_append": false, 00:12:34.317 "compare": false, 00:12:34.317 "compare_and_write": false, 00:12:34.317 "abort": true, 00:12:34.317 "seek_hole": false, 00:12:34.317 "seek_data": false, 
00:12:34.317 "copy": true, 00:12:34.317 "nvme_iov_md": false 00:12:34.317 }, 00:12:34.317 "memory_domains": [ 00:12:34.317 { 00:12:34.317 "dma_device_id": "system", 00:12:34.317 "dma_device_type": 1 00:12:34.317 }, 00:12:34.317 { 00:12:34.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.317 "dma_device_type": 2 00:12:34.317 } 00:12:34.317 ], 00:12:34.317 "driver_specific": {} 00:12:34.317 } 00:12:34.317 ] 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.317 BaseBdev4 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:34.317 
09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.317 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.317 [ 00:12:34.317 { 00:12:34.317 "name": "BaseBdev4", 00:12:34.317 "aliases": [ 00:12:34.317 "b41cc3d5-228c-413c-bf17-c42481af19cf" 00:12:34.317 ], 00:12:34.317 "product_name": "Malloc disk", 00:12:34.317 "block_size": 512, 00:12:34.317 "num_blocks": 65536, 00:12:34.317 "uuid": "b41cc3d5-228c-413c-bf17-c42481af19cf", 00:12:34.317 "assigned_rate_limits": { 00:12:34.317 "rw_ios_per_sec": 0, 00:12:34.317 "rw_mbytes_per_sec": 0, 00:12:34.317 "r_mbytes_per_sec": 0, 00:12:34.317 "w_mbytes_per_sec": 0 00:12:34.317 }, 00:12:34.317 "claimed": false, 00:12:34.317 "zoned": false, 00:12:34.317 "supported_io_types": { 00:12:34.317 "read": true, 00:12:34.317 "write": true, 00:12:34.317 "unmap": true, 00:12:34.317 "flush": true, 00:12:34.317 "reset": true, 00:12:34.317 "nvme_admin": false, 00:12:34.317 "nvme_io": false, 00:12:34.317 "nvme_io_md": false, 00:12:34.317 "write_zeroes": true, 00:12:34.317 "zcopy": true, 00:12:34.317 "get_zone_info": false, 00:12:34.317 "zone_management": false, 00:12:34.317 "zone_append": false, 00:12:34.317 "compare": false, 00:12:34.317 "compare_and_write": false, 00:12:34.317 "abort": true, 00:12:34.317 "seek_hole": false, 00:12:34.317 "seek_data": false, 00:12:34.317 
"copy": true, 00:12:34.317 "nvme_iov_md": false 00:12:34.317 }, 00:12:34.317 "memory_domains": [ 00:12:34.317 { 00:12:34.317 "dma_device_id": "system", 00:12:34.317 "dma_device_type": 1 00:12:34.317 }, 00:12:34.317 { 00:12:34.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:34.318 "dma_device_type": 2 00:12:34.318 } 00:12:34.318 ], 00:12:34.318 "driver_specific": {} 00:12:34.318 } 00:12:34.318 ] 00:12:34.318 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.318 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:34.318 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:34.318 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:34.318 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:34.318 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.318 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.318 [2024-11-27 09:49:35.444805] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:34.318 [2024-11-27 09:49:35.444902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:34.318 [2024-11-27 09:49:35.444958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:34.578 [2024-11-27 09:49:35.447175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:34.578 [2024-11-27 09:49:35.447269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:34.578 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.578 09:49:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:34.578 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.578 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.578 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:34.578 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.578 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.578 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.578 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.578 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.578 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.578 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.578 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.578 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.578 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.578 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.578 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.578 "name": "Existed_Raid", 00:12:34.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.578 "strip_size_kb": 64, 00:12:34.578 "state": "configuring", 00:12:34.578 
"raid_level": "concat", 00:12:34.578 "superblock": false, 00:12:34.578 "num_base_bdevs": 4, 00:12:34.578 "num_base_bdevs_discovered": 3, 00:12:34.578 "num_base_bdevs_operational": 4, 00:12:34.578 "base_bdevs_list": [ 00:12:34.578 { 00:12:34.578 "name": "BaseBdev1", 00:12:34.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.578 "is_configured": false, 00:12:34.578 "data_offset": 0, 00:12:34.578 "data_size": 0 00:12:34.578 }, 00:12:34.578 { 00:12:34.578 "name": "BaseBdev2", 00:12:34.578 "uuid": "ae4a1257-8790-4871-b861-da1165a2ee12", 00:12:34.578 "is_configured": true, 00:12:34.578 "data_offset": 0, 00:12:34.578 "data_size": 65536 00:12:34.578 }, 00:12:34.578 { 00:12:34.578 "name": "BaseBdev3", 00:12:34.578 "uuid": "7438c3a7-9580-4a87-a157-92824f7d6f79", 00:12:34.578 "is_configured": true, 00:12:34.578 "data_offset": 0, 00:12:34.578 "data_size": 65536 00:12:34.578 }, 00:12:34.578 { 00:12:34.578 "name": "BaseBdev4", 00:12:34.578 "uuid": "b41cc3d5-228c-413c-bf17-c42481af19cf", 00:12:34.578 "is_configured": true, 00:12:34.578 "data_offset": 0, 00:12:34.578 "data_size": 65536 00:12:34.578 } 00:12:34.578 ] 00:12:34.578 }' 00:12:34.578 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.578 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.838 [2024-11-27 09:49:35.884122] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.838 "name": "Existed_Raid", 00:12:34.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.838 "strip_size_kb": 64, 00:12:34.838 "state": "configuring", 00:12:34.838 "raid_level": "concat", 00:12:34.838 "superblock": false, 
00:12:34.838 "num_base_bdevs": 4, 00:12:34.838 "num_base_bdevs_discovered": 2, 00:12:34.838 "num_base_bdevs_operational": 4, 00:12:34.838 "base_bdevs_list": [ 00:12:34.838 { 00:12:34.838 "name": "BaseBdev1", 00:12:34.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.838 "is_configured": false, 00:12:34.838 "data_offset": 0, 00:12:34.838 "data_size": 0 00:12:34.838 }, 00:12:34.838 { 00:12:34.838 "name": null, 00:12:34.838 "uuid": "ae4a1257-8790-4871-b861-da1165a2ee12", 00:12:34.838 "is_configured": false, 00:12:34.838 "data_offset": 0, 00:12:34.838 "data_size": 65536 00:12:34.838 }, 00:12:34.838 { 00:12:34.838 "name": "BaseBdev3", 00:12:34.838 "uuid": "7438c3a7-9580-4a87-a157-92824f7d6f79", 00:12:34.838 "is_configured": true, 00:12:34.838 "data_offset": 0, 00:12:34.838 "data_size": 65536 00:12:34.838 }, 00:12:34.838 { 00:12:34.838 "name": "BaseBdev4", 00:12:34.838 "uuid": "b41cc3d5-228c-413c-bf17-c42481af19cf", 00:12:34.838 "is_configured": true, 00:12:34.838 "data_offset": 0, 00:12:34.838 "data_size": 65536 00:12:34.838 } 00:12:34.838 ] 00:12:34.838 }' 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.838 09:49:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:35.409 09:49:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.409 [2024-11-27 09:49:36.414164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.409 BaseBdev1 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:35.409 [ 00:12:35.409 { 00:12:35.409 "name": "BaseBdev1", 00:12:35.409 "aliases": [ 00:12:35.409 "aff221f5-300a-483e-8e24-643e2dc708d2" 00:12:35.409 ], 00:12:35.409 "product_name": "Malloc disk", 00:12:35.409 "block_size": 512, 00:12:35.409 "num_blocks": 65536, 00:12:35.409 "uuid": "aff221f5-300a-483e-8e24-643e2dc708d2", 00:12:35.409 "assigned_rate_limits": { 00:12:35.409 "rw_ios_per_sec": 0, 00:12:35.409 "rw_mbytes_per_sec": 0, 00:12:35.409 "r_mbytes_per_sec": 0, 00:12:35.409 "w_mbytes_per_sec": 0 00:12:35.409 }, 00:12:35.409 "claimed": true, 00:12:35.409 "claim_type": "exclusive_write", 00:12:35.409 "zoned": false, 00:12:35.409 "supported_io_types": { 00:12:35.409 "read": true, 00:12:35.409 "write": true, 00:12:35.409 "unmap": true, 00:12:35.409 "flush": true, 00:12:35.409 "reset": true, 00:12:35.409 "nvme_admin": false, 00:12:35.409 "nvme_io": false, 00:12:35.409 "nvme_io_md": false, 00:12:35.409 "write_zeroes": true, 00:12:35.409 "zcopy": true, 00:12:35.409 "get_zone_info": false, 00:12:35.409 "zone_management": false, 00:12:35.409 "zone_append": false, 00:12:35.409 "compare": false, 00:12:35.409 "compare_and_write": false, 00:12:35.409 "abort": true, 00:12:35.409 "seek_hole": false, 00:12:35.409 "seek_data": false, 00:12:35.409 "copy": true, 00:12:35.409 "nvme_iov_md": false 00:12:35.409 }, 00:12:35.409 "memory_domains": [ 00:12:35.409 { 00:12:35.409 "dma_device_id": "system", 00:12:35.409 "dma_device_type": 1 00:12:35.409 }, 00:12:35.409 { 00:12:35.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.409 "dma_device_type": 2 00:12:35.409 } 00:12:35.409 ], 00:12:35.409 "driver_specific": {} 00:12:35.409 } 00:12:35.409 ] 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.409 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.410 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.410 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.410 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.410 "name": "Existed_Raid", 00:12:35.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.410 "strip_size_kb": 64, 00:12:35.410 "state": "configuring", 00:12:35.410 "raid_level": "concat", 00:12:35.410 "superblock": false, 
00:12:35.410 "num_base_bdevs": 4, 00:12:35.410 "num_base_bdevs_discovered": 3, 00:12:35.410 "num_base_bdevs_operational": 4, 00:12:35.410 "base_bdevs_list": [ 00:12:35.410 { 00:12:35.410 "name": "BaseBdev1", 00:12:35.410 "uuid": "aff221f5-300a-483e-8e24-643e2dc708d2", 00:12:35.410 "is_configured": true, 00:12:35.410 "data_offset": 0, 00:12:35.410 "data_size": 65536 00:12:35.410 }, 00:12:35.410 { 00:12:35.410 "name": null, 00:12:35.410 "uuid": "ae4a1257-8790-4871-b861-da1165a2ee12", 00:12:35.410 "is_configured": false, 00:12:35.410 "data_offset": 0, 00:12:35.410 "data_size": 65536 00:12:35.410 }, 00:12:35.410 { 00:12:35.410 "name": "BaseBdev3", 00:12:35.410 "uuid": "7438c3a7-9580-4a87-a157-92824f7d6f79", 00:12:35.410 "is_configured": true, 00:12:35.410 "data_offset": 0, 00:12:35.410 "data_size": 65536 00:12:35.410 }, 00:12:35.410 { 00:12:35.410 "name": "BaseBdev4", 00:12:35.410 "uuid": "b41cc3d5-228c-413c-bf17-c42481af19cf", 00:12:35.410 "is_configured": true, 00:12:35.410 "data_offset": 0, 00:12:35.410 "data_size": 65536 00:12:35.410 } 00:12:35.410 ] 00:12:35.410 }' 00:12:35.410 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.410 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:35.978 09:49:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.978 [2024-11-27 09:49:36.889459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.978 "name": "Existed_Raid", 00:12:35.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.978 "strip_size_kb": 64, 00:12:35.978 "state": "configuring", 00:12:35.978 "raid_level": "concat", 00:12:35.978 "superblock": false, 00:12:35.978 "num_base_bdevs": 4, 00:12:35.978 "num_base_bdevs_discovered": 2, 00:12:35.978 "num_base_bdevs_operational": 4, 00:12:35.978 "base_bdevs_list": [ 00:12:35.978 { 00:12:35.978 "name": "BaseBdev1", 00:12:35.978 "uuid": "aff221f5-300a-483e-8e24-643e2dc708d2", 00:12:35.978 "is_configured": true, 00:12:35.978 "data_offset": 0, 00:12:35.978 "data_size": 65536 00:12:35.978 }, 00:12:35.978 { 00:12:35.978 "name": null, 00:12:35.978 "uuid": "ae4a1257-8790-4871-b861-da1165a2ee12", 00:12:35.978 "is_configured": false, 00:12:35.978 "data_offset": 0, 00:12:35.978 "data_size": 65536 00:12:35.978 }, 00:12:35.978 { 00:12:35.978 "name": null, 00:12:35.978 "uuid": "7438c3a7-9580-4a87-a157-92824f7d6f79", 00:12:35.978 "is_configured": false, 00:12:35.978 "data_offset": 0, 00:12:35.978 "data_size": 65536 00:12:35.978 }, 00:12:35.978 { 00:12:35.978 "name": "BaseBdev4", 00:12:35.978 "uuid": "b41cc3d5-228c-413c-bf17-c42481af19cf", 00:12:35.978 "is_configured": true, 00:12:35.978 "data_offset": 0, 00:12:35.978 "data_size": 65536 00:12:35.978 } 00:12:35.978 ] 00:12:35.978 }' 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.978 09:49:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.238 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:36.238 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:36.238 09:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.238 09:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.238 09:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.498 [2024-11-27 09:49:37.388577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.498 "name": "Existed_Raid", 00:12:36.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.498 "strip_size_kb": 64, 00:12:36.498 "state": "configuring", 00:12:36.498 "raid_level": "concat", 00:12:36.498 "superblock": false, 00:12:36.498 "num_base_bdevs": 4, 00:12:36.498 "num_base_bdevs_discovered": 3, 00:12:36.498 "num_base_bdevs_operational": 4, 00:12:36.498 "base_bdevs_list": [ 00:12:36.498 { 00:12:36.498 "name": "BaseBdev1", 00:12:36.498 "uuid": "aff221f5-300a-483e-8e24-643e2dc708d2", 00:12:36.498 "is_configured": true, 00:12:36.498 "data_offset": 0, 00:12:36.498 "data_size": 65536 00:12:36.498 }, 00:12:36.498 { 00:12:36.498 "name": null, 00:12:36.498 "uuid": "ae4a1257-8790-4871-b861-da1165a2ee12", 00:12:36.498 "is_configured": false, 00:12:36.498 "data_offset": 0, 00:12:36.498 "data_size": 65536 00:12:36.498 }, 00:12:36.498 { 00:12:36.498 "name": "BaseBdev3", 00:12:36.498 "uuid": "7438c3a7-9580-4a87-a157-92824f7d6f79", 00:12:36.498 
"is_configured": true, 00:12:36.498 "data_offset": 0, 00:12:36.498 "data_size": 65536 00:12:36.498 }, 00:12:36.498 { 00:12:36.498 "name": "BaseBdev4", 00:12:36.498 "uuid": "b41cc3d5-228c-413c-bf17-c42481af19cf", 00:12:36.498 "is_configured": true, 00:12:36.498 "data_offset": 0, 00:12:36.498 "data_size": 65536 00:12:36.498 } 00:12:36.498 ] 00:12:36.498 }' 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.498 09:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.757 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:36.757 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.757 09:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.757 09:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.757 09:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.757 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:36.757 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:36.757 09:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.757 09:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.757 [2024-11-27 09:49:37.887759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:37.015 09:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.015 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:37.015 09:49:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.015 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.015 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:37.015 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.015 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.015 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.015 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.015 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.015 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.015 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.015 09:49:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.015 09:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.015 09:49:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.015 09:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.015 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.015 "name": "Existed_Raid", 00:12:37.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.015 "strip_size_kb": 64, 00:12:37.015 "state": "configuring", 00:12:37.015 "raid_level": "concat", 00:12:37.015 "superblock": false, 00:12:37.015 "num_base_bdevs": 4, 00:12:37.015 "num_base_bdevs_discovered": 2, 00:12:37.015 "num_base_bdevs_operational": 4, 
00:12:37.015 "base_bdevs_list": [ 00:12:37.015 { 00:12:37.015 "name": null, 00:12:37.015 "uuid": "aff221f5-300a-483e-8e24-643e2dc708d2", 00:12:37.015 "is_configured": false, 00:12:37.015 "data_offset": 0, 00:12:37.015 "data_size": 65536 00:12:37.015 }, 00:12:37.015 { 00:12:37.015 "name": null, 00:12:37.015 "uuid": "ae4a1257-8790-4871-b861-da1165a2ee12", 00:12:37.015 "is_configured": false, 00:12:37.015 "data_offset": 0, 00:12:37.015 "data_size": 65536 00:12:37.016 }, 00:12:37.016 { 00:12:37.016 "name": "BaseBdev3", 00:12:37.016 "uuid": "7438c3a7-9580-4a87-a157-92824f7d6f79", 00:12:37.016 "is_configured": true, 00:12:37.016 "data_offset": 0, 00:12:37.016 "data_size": 65536 00:12:37.016 }, 00:12:37.016 { 00:12:37.016 "name": "BaseBdev4", 00:12:37.016 "uuid": "b41cc3d5-228c-413c-bf17-c42481af19cf", 00:12:37.016 "is_configured": true, 00:12:37.016 "data_offset": 0, 00:12:37.016 "data_size": 65536 00:12:37.016 } 00:12:37.016 ] 00:12:37.016 }' 00:12:37.016 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.016 09:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:37.585 09:49:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.585 [2024-11-27 09:49:38.476803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.585 09:49:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.585 "name": "Existed_Raid", 00:12:37.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.585 "strip_size_kb": 64, 00:12:37.585 "state": "configuring", 00:12:37.585 "raid_level": "concat", 00:12:37.585 "superblock": false, 00:12:37.585 "num_base_bdevs": 4, 00:12:37.585 "num_base_bdevs_discovered": 3, 00:12:37.585 "num_base_bdevs_operational": 4, 00:12:37.585 "base_bdevs_list": [ 00:12:37.585 { 00:12:37.585 "name": null, 00:12:37.585 "uuid": "aff221f5-300a-483e-8e24-643e2dc708d2", 00:12:37.585 "is_configured": false, 00:12:37.585 "data_offset": 0, 00:12:37.585 "data_size": 65536 00:12:37.585 }, 00:12:37.585 { 00:12:37.585 "name": "BaseBdev2", 00:12:37.585 "uuid": "ae4a1257-8790-4871-b861-da1165a2ee12", 00:12:37.585 "is_configured": true, 00:12:37.585 "data_offset": 0, 00:12:37.585 "data_size": 65536 00:12:37.585 }, 00:12:37.585 { 00:12:37.585 "name": "BaseBdev3", 00:12:37.585 "uuid": "7438c3a7-9580-4a87-a157-92824f7d6f79", 00:12:37.585 "is_configured": true, 00:12:37.585 "data_offset": 0, 00:12:37.585 "data_size": 65536 00:12:37.585 }, 00:12:37.585 { 00:12:37.585 "name": "BaseBdev4", 00:12:37.585 "uuid": "b41cc3d5-228c-413c-bf17-c42481af19cf", 00:12:37.585 "is_configured": true, 00:12:37.585 "data_offset": 0, 00:12:37.585 "data_size": 65536 00:12:37.585 } 00:12:37.585 ] 00:12:37.585 }' 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.585 09:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.845 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.845 09:49:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:37.845 09:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.845 09:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.845 09:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.845 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:37.845 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:37.845 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.845 09:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.845 09:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.105 09:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.105 09:49:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u aff221f5-300a-483e-8e24-643e2dc708d2 00:12:38.105 09:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.105 09:49:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.105 [2024-11-27 09:49:39.043096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:38.105 [2024-11-27 09:49:39.043241] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:38.105 [2024-11-27 09:49:39.043255] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:38.105 [2024-11-27 09:49:39.043604] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:38.105 [2024-11-27 09:49:39.043793] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:38.105 [2024-11-27 09:49:39.043806] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:38.105 [2024-11-27 09:49:39.044126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:38.105 NewBaseBdev 00:12:38.105 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.105 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:38.105 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:38.105 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:38.105 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:38.105 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:38.105 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:38.105 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:38.105 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.105 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.105 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.105 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:38.105 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.105 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.105 [ 00:12:38.105 { 
00:12:38.105 "name": "NewBaseBdev", 00:12:38.105 "aliases": [ 00:12:38.105 "aff221f5-300a-483e-8e24-643e2dc708d2" 00:12:38.105 ], 00:12:38.105 "product_name": "Malloc disk", 00:12:38.105 "block_size": 512, 00:12:38.105 "num_blocks": 65536, 00:12:38.105 "uuid": "aff221f5-300a-483e-8e24-643e2dc708d2", 00:12:38.105 "assigned_rate_limits": { 00:12:38.105 "rw_ios_per_sec": 0, 00:12:38.105 "rw_mbytes_per_sec": 0, 00:12:38.105 "r_mbytes_per_sec": 0, 00:12:38.105 "w_mbytes_per_sec": 0 00:12:38.105 }, 00:12:38.105 "claimed": true, 00:12:38.105 "claim_type": "exclusive_write", 00:12:38.105 "zoned": false, 00:12:38.105 "supported_io_types": { 00:12:38.105 "read": true, 00:12:38.105 "write": true, 00:12:38.105 "unmap": true, 00:12:38.105 "flush": true, 00:12:38.105 "reset": true, 00:12:38.105 "nvme_admin": false, 00:12:38.105 "nvme_io": false, 00:12:38.105 "nvme_io_md": false, 00:12:38.105 "write_zeroes": true, 00:12:38.105 "zcopy": true, 00:12:38.105 "get_zone_info": false, 00:12:38.105 "zone_management": false, 00:12:38.105 "zone_append": false, 00:12:38.105 "compare": false, 00:12:38.105 "compare_and_write": false, 00:12:38.105 "abort": true, 00:12:38.105 "seek_hole": false, 00:12:38.105 "seek_data": false, 00:12:38.105 "copy": true, 00:12:38.105 "nvme_iov_md": false 00:12:38.105 }, 00:12:38.105 "memory_domains": [ 00:12:38.105 { 00:12:38.105 "dma_device_id": "system", 00:12:38.106 "dma_device_type": 1 00:12:38.106 }, 00:12:38.106 { 00:12:38.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.106 "dma_device_type": 2 00:12:38.106 } 00:12:38.106 ], 00:12:38.106 "driver_specific": {} 00:12:38.106 } 00:12:38.106 ] 00:12:38.106 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.106 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:38.106 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:38.106 
09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.106 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.106 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:38.106 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.106 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.106 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.106 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.106 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.106 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.106 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.106 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.106 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.106 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.106 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.106 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.106 "name": "Existed_Raid", 00:12:38.106 "uuid": "d0427579-3ce2-4007-9273-ec0cc1ce1cfd", 00:12:38.106 "strip_size_kb": 64, 00:12:38.106 "state": "online", 00:12:38.106 "raid_level": "concat", 00:12:38.106 "superblock": false, 00:12:38.106 "num_base_bdevs": 4, 00:12:38.106 "num_base_bdevs_discovered": 4, 00:12:38.106 
"num_base_bdevs_operational": 4, 00:12:38.106 "base_bdevs_list": [ 00:12:38.106 { 00:12:38.106 "name": "NewBaseBdev", 00:12:38.106 "uuid": "aff221f5-300a-483e-8e24-643e2dc708d2", 00:12:38.106 "is_configured": true, 00:12:38.106 "data_offset": 0, 00:12:38.106 "data_size": 65536 00:12:38.106 }, 00:12:38.106 { 00:12:38.106 "name": "BaseBdev2", 00:12:38.106 "uuid": "ae4a1257-8790-4871-b861-da1165a2ee12", 00:12:38.106 "is_configured": true, 00:12:38.106 "data_offset": 0, 00:12:38.106 "data_size": 65536 00:12:38.106 }, 00:12:38.106 { 00:12:38.106 "name": "BaseBdev3", 00:12:38.106 "uuid": "7438c3a7-9580-4a87-a157-92824f7d6f79", 00:12:38.106 "is_configured": true, 00:12:38.106 "data_offset": 0, 00:12:38.106 "data_size": 65536 00:12:38.106 }, 00:12:38.106 { 00:12:38.106 "name": "BaseBdev4", 00:12:38.106 "uuid": "b41cc3d5-228c-413c-bf17-c42481af19cf", 00:12:38.106 "is_configured": true, 00:12:38.106 "data_offset": 0, 00:12:38.106 "data_size": 65536 00:12:38.106 } 00:12:38.106 ] 00:12:38.106 }' 00:12:38.106 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.106 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.674 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:38.674 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:38.674 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:38.674 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:38.674 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:38.674 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:38.674 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:12:38.674 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:38.674 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.674 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.674 [2024-11-27 09:49:39.554717] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:38.674 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.674 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:38.674 "name": "Existed_Raid", 00:12:38.674 "aliases": [ 00:12:38.674 "d0427579-3ce2-4007-9273-ec0cc1ce1cfd" 00:12:38.674 ], 00:12:38.674 "product_name": "Raid Volume", 00:12:38.674 "block_size": 512, 00:12:38.674 "num_blocks": 262144, 00:12:38.674 "uuid": "d0427579-3ce2-4007-9273-ec0cc1ce1cfd", 00:12:38.674 "assigned_rate_limits": { 00:12:38.674 "rw_ios_per_sec": 0, 00:12:38.674 "rw_mbytes_per_sec": 0, 00:12:38.674 "r_mbytes_per_sec": 0, 00:12:38.674 "w_mbytes_per_sec": 0 00:12:38.674 }, 00:12:38.674 "claimed": false, 00:12:38.674 "zoned": false, 00:12:38.674 "supported_io_types": { 00:12:38.674 "read": true, 00:12:38.674 "write": true, 00:12:38.674 "unmap": true, 00:12:38.674 "flush": true, 00:12:38.674 "reset": true, 00:12:38.674 "nvme_admin": false, 00:12:38.674 "nvme_io": false, 00:12:38.674 "nvme_io_md": false, 00:12:38.674 "write_zeroes": true, 00:12:38.674 "zcopy": false, 00:12:38.674 "get_zone_info": false, 00:12:38.674 "zone_management": false, 00:12:38.674 "zone_append": false, 00:12:38.674 "compare": false, 00:12:38.674 "compare_and_write": false, 00:12:38.674 "abort": false, 00:12:38.674 "seek_hole": false, 00:12:38.674 "seek_data": false, 00:12:38.674 "copy": false, 00:12:38.674 "nvme_iov_md": false 00:12:38.674 }, 00:12:38.674 "memory_domains": [ 00:12:38.674 { 00:12:38.674 "dma_device_id": "system", 
00:12:38.674 "dma_device_type": 1 00:12:38.674 }, 00:12:38.674 { 00:12:38.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.674 "dma_device_type": 2 00:12:38.674 }, 00:12:38.674 { 00:12:38.674 "dma_device_id": "system", 00:12:38.674 "dma_device_type": 1 00:12:38.674 }, 00:12:38.674 { 00:12:38.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.674 "dma_device_type": 2 00:12:38.674 }, 00:12:38.674 { 00:12:38.674 "dma_device_id": "system", 00:12:38.674 "dma_device_type": 1 00:12:38.674 }, 00:12:38.674 { 00:12:38.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.674 "dma_device_type": 2 00:12:38.674 }, 00:12:38.674 { 00:12:38.674 "dma_device_id": "system", 00:12:38.674 "dma_device_type": 1 00:12:38.674 }, 00:12:38.674 { 00:12:38.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.674 "dma_device_type": 2 00:12:38.674 } 00:12:38.674 ], 00:12:38.674 "driver_specific": { 00:12:38.674 "raid": { 00:12:38.675 "uuid": "d0427579-3ce2-4007-9273-ec0cc1ce1cfd", 00:12:38.675 "strip_size_kb": 64, 00:12:38.675 "state": "online", 00:12:38.675 "raid_level": "concat", 00:12:38.675 "superblock": false, 00:12:38.675 "num_base_bdevs": 4, 00:12:38.675 "num_base_bdevs_discovered": 4, 00:12:38.675 "num_base_bdevs_operational": 4, 00:12:38.675 "base_bdevs_list": [ 00:12:38.675 { 00:12:38.675 "name": "NewBaseBdev", 00:12:38.675 "uuid": "aff221f5-300a-483e-8e24-643e2dc708d2", 00:12:38.675 "is_configured": true, 00:12:38.675 "data_offset": 0, 00:12:38.675 "data_size": 65536 00:12:38.675 }, 00:12:38.675 { 00:12:38.675 "name": "BaseBdev2", 00:12:38.675 "uuid": "ae4a1257-8790-4871-b861-da1165a2ee12", 00:12:38.675 "is_configured": true, 00:12:38.675 "data_offset": 0, 00:12:38.675 "data_size": 65536 00:12:38.675 }, 00:12:38.675 { 00:12:38.675 "name": "BaseBdev3", 00:12:38.675 "uuid": "7438c3a7-9580-4a87-a157-92824f7d6f79", 00:12:38.675 "is_configured": true, 00:12:38.675 "data_offset": 0, 00:12:38.675 "data_size": 65536 00:12:38.675 }, 00:12:38.675 { 00:12:38.675 "name": "BaseBdev4", 
00:12:38.675 "uuid": "b41cc3d5-228c-413c-bf17-c42481af19cf", 00:12:38.675 "is_configured": true, 00:12:38.675 "data_offset": 0, 00:12:38.675 "data_size": 65536 00:12:38.675 } 00:12:38.675 ] 00:12:38.675 } 00:12:38.675 } 00:12:38.675 }' 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:38.675 BaseBdev2 00:12:38.675 BaseBdev3 00:12:38.675 BaseBdev4' 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.675 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:38.935 09:49:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.935 [2024-11-27 09:49:39.917673] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:38.935 [2024-11-27 09:49:39.917766] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:38.935 [2024-11-27 09:49:39.917907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.935 [2024-11-27 09:49:39.918041] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.935 [2024-11-27 09:49:39.918097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71547 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 71547 ']' 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71547 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71547 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71547' 00:12:38.935 killing process with pid 71547 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71547 00:12:38.935 [2024-11-27 09:49:39.962297] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:38.935 09:49:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71547 00:12:39.505 [2024-11-27 09:49:40.480980] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:40.888 ************************************ 00:12:40.888 END TEST raid_state_function_test 00:12:40.888 ************************************ 00:12:40.888 09:49:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:40.888 00:12:40.888 real 0m12.108s 00:12:40.888 user 0m18.730s 00:12:40.888 sys 0m2.246s 00:12:40.888 09:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.888 09:49:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.888 09:49:41 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:12:40.888 09:49:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:40.888 09:49:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.888 09:49:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:40.888 ************************************ 00:12:40.888 START TEST raid_state_function_test_sb 00:12:40.888 ************************************ 00:12:40.888 09:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:12:40.888 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:40.888 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:40.888 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:40.888 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:41.148 09:49:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72224 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72224' 00:12:41.148 Process raid pid: 72224 00:12:41.148 09:49:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72224 00:12:41.149 09:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72224 ']' 00:12:41.149 09:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.149 09:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.149 09:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.149 09:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.149 09:49:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.149 [2024-11-27 09:49:42.132140] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:12:41.149 [2024-11-27 09:49:42.132371] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.409 [2024-11-27 09:49:42.324051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.409 [2024-11-27 09:49:42.483378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.669 [2024-11-27 09:49:42.763336] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.669 [2024-11-27 09:49:42.763410] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.944 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:41.944 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:41.944 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:41.944 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.944 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.944 [2024-11-27 09:49:43.048871] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:41.944 [2024-11-27 09:49:43.048953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:41.944 [2024-11-27 09:49:43.048973] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:41.944 [2024-11-27 09:49:43.048987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:41.944 [2024-11-27 09:49:43.049006] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:12:41.944 [2024-11-27 09:49:43.049019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:41.944 [2024-11-27 09:49:43.049027] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:41.944 [2024-11-27 09:49:43.049038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:41.944 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.944 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:41.944 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.944 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.944 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:41.944 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.944 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.944 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.944 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.944 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.944 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.944 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.944 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.944 
09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.944 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.242 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.242 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.242 "name": "Existed_Raid", 00:12:42.242 "uuid": "43b2a844-1047-4211-9b3f-6f2599cc3620", 00:12:42.242 "strip_size_kb": 64, 00:12:42.242 "state": "configuring", 00:12:42.242 "raid_level": "concat", 00:12:42.242 "superblock": true, 00:12:42.242 "num_base_bdevs": 4, 00:12:42.242 "num_base_bdevs_discovered": 0, 00:12:42.242 "num_base_bdevs_operational": 4, 00:12:42.242 "base_bdevs_list": [ 00:12:42.242 { 00:12:42.242 "name": "BaseBdev1", 00:12:42.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.242 "is_configured": false, 00:12:42.242 "data_offset": 0, 00:12:42.242 "data_size": 0 00:12:42.242 }, 00:12:42.242 { 00:12:42.242 "name": "BaseBdev2", 00:12:42.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.242 "is_configured": false, 00:12:42.242 "data_offset": 0, 00:12:42.242 "data_size": 0 00:12:42.242 }, 00:12:42.242 { 00:12:42.242 "name": "BaseBdev3", 00:12:42.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.242 "is_configured": false, 00:12:42.242 "data_offset": 0, 00:12:42.242 "data_size": 0 00:12:42.242 }, 00:12:42.242 { 00:12:42.242 "name": "BaseBdev4", 00:12:42.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.242 "is_configured": false, 00:12:42.242 "data_offset": 0, 00:12:42.242 "data_size": 0 00:12:42.242 } 00:12:42.242 ] 00:12:42.242 }' 00:12:42.242 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.242 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.518 09:49:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.518 [2024-11-27 09:49:43.512311] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:42.518 [2024-11-27 09:49:43.512373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.518 [2024-11-27 09:49:43.524314] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:42.518 [2024-11-27 09:49:43.524430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:42.518 [2024-11-27 09:49:43.524450] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:42.518 [2024-11-27 09:49:43.524462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:42.518 [2024-11-27 09:49:43.524470] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:42.518 [2024-11-27 09:49:43.524482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:42.518 [2024-11-27 09:49:43.524490] bdev.c:8666:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:12:42.518 [2024-11-27 09:49:43.524501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.518 [2024-11-27 09:49:43.576767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.518 BaseBdev1 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.518 [ 00:12:42.518 { 00:12:42.518 "name": "BaseBdev1", 00:12:42.518 "aliases": [ 00:12:42.518 "8e9bb8c7-94e8-4589-91b0-c0e6f2add06b" 00:12:42.518 ], 00:12:42.518 "product_name": "Malloc disk", 00:12:42.518 "block_size": 512, 00:12:42.518 "num_blocks": 65536, 00:12:42.518 "uuid": "8e9bb8c7-94e8-4589-91b0-c0e6f2add06b", 00:12:42.518 "assigned_rate_limits": { 00:12:42.518 "rw_ios_per_sec": 0, 00:12:42.518 "rw_mbytes_per_sec": 0, 00:12:42.518 "r_mbytes_per_sec": 0, 00:12:42.518 "w_mbytes_per_sec": 0 00:12:42.518 }, 00:12:42.518 "claimed": true, 00:12:42.518 "claim_type": "exclusive_write", 00:12:42.518 "zoned": false, 00:12:42.518 "supported_io_types": { 00:12:42.518 "read": true, 00:12:42.518 "write": true, 00:12:42.518 "unmap": true, 00:12:42.518 "flush": true, 00:12:42.518 "reset": true, 00:12:42.518 "nvme_admin": false, 00:12:42.518 "nvme_io": false, 00:12:42.518 "nvme_io_md": false, 00:12:42.518 "write_zeroes": true, 00:12:42.518 "zcopy": true, 00:12:42.518 "get_zone_info": false, 00:12:42.518 "zone_management": false, 00:12:42.518 "zone_append": false, 00:12:42.518 "compare": false, 00:12:42.518 "compare_and_write": false, 00:12:42.518 "abort": true, 00:12:42.518 "seek_hole": false, 00:12:42.518 "seek_data": false, 00:12:42.518 "copy": true, 00:12:42.518 "nvme_iov_md": false 00:12:42.518 }, 00:12:42.518 "memory_domains": [ 00:12:42.518 { 00:12:42.518 "dma_device_id": "system", 00:12:42.518 "dma_device_type": 1 00:12:42.518 }, 00:12:42.518 { 00:12:42.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.518 "dma_device_type": 2 00:12:42.518 } 
00:12:42.518 ], 00:12:42.518 "driver_specific": {} 00:12:42.518 } 00:12:42.518 ] 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:42.518 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.519 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.519 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:42.519 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.519 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:42.519 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.519 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.519 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.519 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.519 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.519 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.519 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.519 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.519 09:49:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.779 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.779 "name": "Existed_Raid", 00:12:42.779 "uuid": "fc03f2b6-430c-456e-8cf8-965540ff506f", 00:12:42.779 "strip_size_kb": 64, 00:12:42.779 "state": "configuring", 00:12:42.779 "raid_level": "concat", 00:12:42.779 "superblock": true, 00:12:42.779 "num_base_bdevs": 4, 00:12:42.779 "num_base_bdevs_discovered": 1, 00:12:42.779 "num_base_bdevs_operational": 4, 00:12:42.779 "base_bdevs_list": [ 00:12:42.779 { 00:12:42.779 "name": "BaseBdev1", 00:12:42.779 "uuid": "8e9bb8c7-94e8-4589-91b0-c0e6f2add06b", 00:12:42.779 "is_configured": true, 00:12:42.779 "data_offset": 2048, 00:12:42.779 "data_size": 63488 00:12:42.779 }, 00:12:42.779 { 00:12:42.779 "name": "BaseBdev2", 00:12:42.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.779 "is_configured": false, 00:12:42.779 "data_offset": 0, 00:12:42.779 "data_size": 0 00:12:42.779 }, 00:12:42.779 { 00:12:42.779 "name": "BaseBdev3", 00:12:42.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.779 "is_configured": false, 00:12:42.779 "data_offset": 0, 00:12:42.779 "data_size": 0 00:12:42.779 }, 00:12:42.779 { 00:12:42.779 "name": "BaseBdev4", 00:12:42.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.779 "is_configured": false, 00:12:42.779 "data_offset": 0, 00:12:42.779 "data_size": 0 00:12:42.779 } 00:12:42.779 ] 00:12:42.779 }' 00:12:42.779 09:49:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.779 09:49:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.039 09:49:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.039 [2024-11-27 09:49:44.072211] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:43.039 [2024-11-27 09:49:44.072287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.039 [2024-11-27 09:49:44.084257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:43.039 [2024-11-27 09:49:44.086416] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:43.039 [2024-11-27 09:49:44.086459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:43.039 [2024-11-27 09:49:44.086470] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:43.039 [2024-11-27 09:49:44.086481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:43.039 [2024-11-27 09:49:44.086488] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:43.039 [2024-11-27 09:49:44.086496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.039 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:43.039 "name": "Existed_Raid", 00:12:43.039 "uuid": "c6fdaf18-0e03-445f-8938-d08f2f9fc83b", 00:12:43.039 "strip_size_kb": 64, 00:12:43.039 "state": "configuring", 00:12:43.039 "raid_level": "concat", 00:12:43.039 "superblock": true, 00:12:43.039 "num_base_bdevs": 4, 00:12:43.039 "num_base_bdevs_discovered": 1, 00:12:43.039 "num_base_bdevs_operational": 4, 00:12:43.039 "base_bdevs_list": [ 00:12:43.039 { 00:12:43.039 "name": "BaseBdev1", 00:12:43.039 "uuid": "8e9bb8c7-94e8-4589-91b0-c0e6f2add06b", 00:12:43.039 "is_configured": true, 00:12:43.040 "data_offset": 2048, 00:12:43.040 "data_size": 63488 00:12:43.040 }, 00:12:43.040 { 00:12:43.040 "name": "BaseBdev2", 00:12:43.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.040 "is_configured": false, 00:12:43.040 "data_offset": 0, 00:12:43.040 "data_size": 0 00:12:43.040 }, 00:12:43.040 { 00:12:43.040 "name": "BaseBdev3", 00:12:43.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.040 "is_configured": false, 00:12:43.040 "data_offset": 0, 00:12:43.040 "data_size": 0 00:12:43.040 }, 00:12:43.040 { 00:12:43.040 "name": "BaseBdev4", 00:12:43.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.040 "is_configured": false, 00:12:43.040 "data_offset": 0, 00:12:43.040 "data_size": 0 00:12:43.040 } 00:12:43.040 ] 00:12:43.040 }' 00:12:43.040 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.040 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.608 [2024-11-27 09:49:44.551274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:12:43.608 BaseBdev2 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.608 [ 00:12:43.608 { 00:12:43.608 "name": "BaseBdev2", 00:12:43.608 "aliases": [ 00:12:43.608 "58b9c76e-5c54-4f89-b0d8-6903afea4756" 00:12:43.608 ], 00:12:43.608 "product_name": "Malloc disk", 00:12:43.608 "block_size": 512, 00:12:43.608 "num_blocks": 65536, 00:12:43.608 "uuid": "58b9c76e-5c54-4f89-b0d8-6903afea4756", 
00:12:43.608 "assigned_rate_limits": { 00:12:43.608 "rw_ios_per_sec": 0, 00:12:43.608 "rw_mbytes_per_sec": 0, 00:12:43.608 "r_mbytes_per_sec": 0, 00:12:43.608 "w_mbytes_per_sec": 0 00:12:43.608 }, 00:12:43.608 "claimed": true, 00:12:43.608 "claim_type": "exclusive_write", 00:12:43.608 "zoned": false, 00:12:43.608 "supported_io_types": { 00:12:43.608 "read": true, 00:12:43.608 "write": true, 00:12:43.608 "unmap": true, 00:12:43.608 "flush": true, 00:12:43.608 "reset": true, 00:12:43.608 "nvme_admin": false, 00:12:43.608 "nvme_io": false, 00:12:43.608 "nvme_io_md": false, 00:12:43.608 "write_zeroes": true, 00:12:43.608 "zcopy": true, 00:12:43.608 "get_zone_info": false, 00:12:43.608 "zone_management": false, 00:12:43.608 "zone_append": false, 00:12:43.608 "compare": false, 00:12:43.608 "compare_and_write": false, 00:12:43.608 "abort": true, 00:12:43.608 "seek_hole": false, 00:12:43.608 "seek_data": false, 00:12:43.608 "copy": true, 00:12:43.608 "nvme_iov_md": false 00:12:43.608 }, 00:12:43.608 "memory_domains": [ 00:12:43.608 { 00:12:43.608 "dma_device_id": "system", 00:12:43.608 "dma_device_type": 1 00:12:43.608 }, 00:12:43.608 { 00:12:43.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.608 "dma_device_type": 2 00:12:43.608 } 00:12:43.608 ], 00:12:43.608 "driver_specific": {} 00:12:43.608 } 00:12:43.608 ] 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.608 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.608 "name": "Existed_Raid", 00:12:43.608 "uuid": "c6fdaf18-0e03-445f-8938-d08f2f9fc83b", 00:12:43.608 "strip_size_kb": 64, 00:12:43.608 "state": "configuring", 00:12:43.608 "raid_level": "concat", 00:12:43.608 "superblock": true, 00:12:43.608 "num_base_bdevs": 4, 00:12:43.608 "num_base_bdevs_discovered": 2, 00:12:43.608 
"num_base_bdevs_operational": 4, 00:12:43.608 "base_bdevs_list": [ 00:12:43.608 { 00:12:43.608 "name": "BaseBdev1", 00:12:43.608 "uuid": "8e9bb8c7-94e8-4589-91b0-c0e6f2add06b", 00:12:43.608 "is_configured": true, 00:12:43.608 "data_offset": 2048, 00:12:43.608 "data_size": 63488 00:12:43.608 }, 00:12:43.608 { 00:12:43.608 "name": "BaseBdev2", 00:12:43.608 "uuid": "58b9c76e-5c54-4f89-b0d8-6903afea4756", 00:12:43.608 "is_configured": true, 00:12:43.608 "data_offset": 2048, 00:12:43.608 "data_size": 63488 00:12:43.608 }, 00:12:43.608 { 00:12:43.608 "name": "BaseBdev3", 00:12:43.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.608 "is_configured": false, 00:12:43.608 "data_offset": 0, 00:12:43.608 "data_size": 0 00:12:43.608 }, 00:12:43.608 { 00:12:43.609 "name": "BaseBdev4", 00:12:43.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.609 "is_configured": false, 00:12:43.609 "data_offset": 0, 00:12:43.609 "data_size": 0 00:12:43.609 } 00:12:43.609 ] 00:12:43.609 }' 00:12:43.609 09:49:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.609 09:49:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.178 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:44.178 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.178 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.178 [2024-11-27 09:49:45.059692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:44.178 BaseBdev3 00:12:44.178 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.178 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:44.178 09:49:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:44.178 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:44.178 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:44.178 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:44.178 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:44.178 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:44.178 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.178 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.178 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.178 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:44.178 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.178 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.178 [ 00:12:44.178 { 00:12:44.178 "name": "BaseBdev3", 00:12:44.178 "aliases": [ 00:12:44.178 "8012ac75-8f77-4003-b259-5e9c6b2d904c" 00:12:44.178 ], 00:12:44.178 "product_name": "Malloc disk", 00:12:44.178 "block_size": 512, 00:12:44.178 "num_blocks": 65536, 00:12:44.178 "uuid": "8012ac75-8f77-4003-b259-5e9c6b2d904c", 00:12:44.178 "assigned_rate_limits": { 00:12:44.178 "rw_ios_per_sec": 0, 00:12:44.178 "rw_mbytes_per_sec": 0, 00:12:44.178 "r_mbytes_per_sec": 0, 00:12:44.178 "w_mbytes_per_sec": 0 00:12:44.178 }, 00:12:44.178 "claimed": true, 00:12:44.178 "claim_type": "exclusive_write", 00:12:44.178 "zoned": false, 00:12:44.178 "supported_io_types": { 
00:12:44.178 "read": true, 00:12:44.178 "write": true, 00:12:44.178 "unmap": true, 00:12:44.178 "flush": true, 00:12:44.178 "reset": true, 00:12:44.178 "nvme_admin": false, 00:12:44.178 "nvme_io": false, 00:12:44.178 "nvme_io_md": false, 00:12:44.178 "write_zeroes": true, 00:12:44.179 "zcopy": true, 00:12:44.179 "get_zone_info": false, 00:12:44.179 "zone_management": false, 00:12:44.179 "zone_append": false, 00:12:44.179 "compare": false, 00:12:44.179 "compare_and_write": false, 00:12:44.179 "abort": true, 00:12:44.179 "seek_hole": false, 00:12:44.179 "seek_data": false, 00:12:44.179 "copy": true, 00:12:44.179 "nvme_iov_md": false 00:12:44.179 }, 00:12:44.179 "memory_domains": [ 00:12:44.179 { 00:12:44.179 "dma_device_id": "system", 00:12:44.179 "dma_device_type": 1 00:12:44.179 }, 00:12:44.179 { 00:12:44.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.179 "dma_device_type": 2 00:12:44.179 } 00:12:44.179 ], 00:12:44.179 "driver_specific": {} 00:12:44.179 } 00:12:44.179 ] 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.179 "name": "Existed_Raid", 00:12:44.179 "uuid": "c6fdaf18-0e03-445f-8938-d08f2f9fc83b", 00:12:44.179 "strip_size_kb": 64, 00:12:44.179 "state": "configuring", 00:12:44.179 "raid_level": "concat", 00:12:44.179 "superblock": true, 00:12:44.179 "num_base_bdevs": 4, 00:12:44.179 "num_base_bdevs_discovered": 3, 00:12:44.179 "num_base_bdevs_operational": 4, 00:12:44.179 "base_bdevs_list": [ 00:12:44.179 { 00:12:44.179 "name": "BaseBdev1", 00:12:44.179 "uuid": "8e9bb8c7-94e8-4589-91b0-c0e6f2add06b", 00:12:44.179 "is_configured": true, 00:12:44.179 "data_offset": 2048, 00:12:44.179 "data_size": 63488 00:12:44.179 }, 00:12:44.179 { 00:12:44.179 "name": "BaseBdev2", 00:12:44.179 
"uuid": "58b9c76e-5c54-4f89-b0d8-6903afea4756", 00:12:44.179 "is_configured": true, 00:12:44.179 "data_offset": 2048, 00:12:44.179 "data_size": 63488 00:12:44.179 }, 00:12:44.179 { 00:12:44.179 "name": "BaseBdev3", 00:12:44.179 "uuid": "8012ac75-8f77-4003-b259-5e9c6b2d904c", 00:12:44.179 "is_configured": true, 00:12:44.179 "data_offset": 2048, 00:12:44.179 "data_size": 63488 00:12:44.179 }, 00:12:44.179 { 00:12:44.179 "name": "BaseBdev4", 00:12:44.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.179 "is_configured": false, 00:12:44.179 "data_offset": 0, 00:12:44.179 "data_size": 0 00:12:44.179 } 00:12:44.179 ] 00:12:44.179 }' 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.179 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.439 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:44.439 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.439 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.439 [2024-11-27 09:49:45.562450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:44.439 [2024-11-27 09:49:45.562755] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:44.439 [2024-11-27 09:49:45.562773] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:44.439 [2024-11-27 09:49:45.563114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:44.439 [2024-11-27 09:49:45.563294] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:44.439 [2024-11-27 09:49:45.563307] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:12:44.439 [2024-11-27 09:49:45.563471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.439 BaseBdev4 00:12:44.439 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.439 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:44.439 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:44.439 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:44.439 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:44.439 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:44.439 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:44.439 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:44.439 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.439 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.699 [ 00:12:44.699 { 00:12:44.699 "name": "BaseBdev4", 00:12:44.699 "aliases": [ 00:12:44.699 "fe3112b9-d45c-4fa5-a134-f501e7661e1d" 00:12:44.699 ], 00:12:44.699 "product_name": "Malloc disk", 00:12:44.699 "block_size": 512, 
00:12:44.699 "num_blocks": 65536, 00:12:44.699 "uuid": "fe3112b9-d45c-4fa5-a134-f501e7661e1d", 00:12:44.699 "assigned_rate_limits": { 00:12:44.699 "rw_ios_per_sec": 0, 00:12:44.699 "rw_mbytes_per_sec": 0, 00:12:44.699 "r_mbytes_per_sec": 0, 00:12:44.699 "w_mbytes_per_sec": 0 00:12:44.699 }, 00:12:44.699 "claimed": true, 00:12:44.699 "claim_type": "exclusive_write", 00:12:44.699 "zoned": false, 00:12:44.699 "supported_io_types": { 00:12:44.699 "read": true, 00:12:44.699 "write": true, 00:12:44.699 "unmap": true, 00:12:44.699 "flush": true, 00:12:44.699 "reset": true, 00:12:44.699 "nvme_admin": false, 00:12:44.699 "nvme_io": false, 00:12:44.699 "nvme_io_md": false, 00:12:44.699 "write_zeroes": true, 00:12:44.699 "zcopy": true, 00:12:44.699 "get_zone_info": false, 00:12:44.699 "zone_management": false, 00:12:44.699 "zone_append": false, 00:12:44.699 "compare": false, 00:12:44.699 "compare_and_write": false, 00:12:44.699 "abort": true, 00:12:44.699 "seek_hole": false, 00:12:44.699 "seek_data": false, 00:12:44.699 "copy": true, 00:12:44.699 "nvme_iov_md": false 00:12:44.699 }, 00:12:44.699 "memory_domains": [ 00:12:44.699 { 00:12:44.699 "dma_device_id": "system", 00:12:44.699 "dma_device_type": 1 00:12:44.699 }, 00:12:44.699 { 00:12:44.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.699 "dma_device_type": 2 00:12:44.699 } 00:12:44.699 ], 00:12:44.699 "driver_specific": {} 00:12:44.699 } 00:12:44.699 ] 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.699 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.700 "name": "Existed_Raid", 00:12:44.700 "uuid": "c6fdaf18-0e03-445f-8938-d08f2f9fc83b", 00:12:44.700 "strip_size_kb": 64, 00:12:44.700 "state": "online", 00:12:44.700 "raid_level": "concat", 00:12:44.700 "superblock": true, 00:12:44.700 "num_base_bdevs": 
4, 00:12:44.700 "num_base_bdevs_discovered": 4, 00:12:44.700 "num_base_bdevs_operational": 4, 00:12:44.700 "base_bdevs_list": [ 00:12:44.700 { 00:12:44.700 "name": "BaseBdev1", 00:12:44.700 "uuid": "8e9bb8c7-94e8-4589-91b0-c0e6f2add06b", 00:12:44.700 "is_configured": true, 00:12:44.700 "data_offset": 2048, 00:12:44.700 "data_size": 63488 00:12:44.700 }, 00:12:44.700 { 00:12:44.700 "name": "BaseBdev2", 00:12:44.700 "uuid": "58b9c76e-5c54-4f89-b0d8-6903afea4756", 00:12:44.700 "is_configured": true, 00:12:44.700 "data_offset": 2048, 00:12:44.700 "data_size": 63488 00:12:44.700 }, 00:12:44.700 { 00:12:44.700 "name": "BaseBdev3", 00:12:44.700 "uuid": "8012ac75-8f77-4003-b259-5e9c6b2d904c", 00:12:44.700 "is_configured": true, 00:12:44.700 "data_offset": 2048, 00:12:44.700 "data_size": 63488 00:12:44.700 }, 00:12:44.700 { 00:12:44.700 "name": "BaseBdev4", 00:12:44.700 "uuid": "fe3112b9-d45c-4fa5-a134-f501e7661e1d", 00:12:44.700 "is_configured": true, 00:12:44.700 "data_offset": 2048, 00:12:44.700 "data_size": 63488 00:12:44.700 } 00:12:44.700 ] 00:12:44.700 }' 00:12:44.700 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.700 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.959 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:44.959 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:44.959 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:44.959 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:44.959 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:44.959 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:44.959 
09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:44.959 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.959 09:49:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.959 09:49:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:44.959 [2024-11-27 09:49:45.998257] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.959 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.959 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:44.959 "name": "Existed_Raid", 00:12:44.959 "aliases": [ 00:12:44.959 "c6fdaf18-0e03-445f-8938-d08f2f9fc83b" 00:12:44.959 ], 00:12:44.959 "product_name": "Raid Volume", 00:12:44.959 "block_size": 512, 00:12:44.959 "num_blocks": 253952, 00:12:44.959 "uuid": "c6fdaf18-0e03-445f-8938-d08f2f9fc83b", 00:12:44.959 "assigned_rate_limits": { 00:12:44.959 "rw_ios_per_sec": 0, 00:12:44.959 "rw_mbytes_per_sec": 0, 00:12:44.959 "r_mbytes_per_sec": 0, 00:12:44.959 "w_mbytes_per_sec": 0 00:12:44.959 }, 00:12:44.959 "claimed": false, 00:12:44.959 "zoned": false, 00:12:44.959 "supported_io_types": { 00:12:44.959 "read": true, 00:12:44.959 "write": true, 00:12:44.959 "unmap": true, 00:12:44.959 "flush": true, 00:12:44.959 "reset": true, 00:12:44.959 "nvme_admin": false, 00:12:44.959 "nvme_io": false, 00:12:44.959 "nvme_io_md": false, 00:12:44.959 "write_zeroes": true, 00:12:44.959 "zcopy": false, 00:12:44.959 "get_zone_info": false, 00:12:44.959 "zone_management": false, 00:12:44.959 "zone_append": false, 00:12:44.959 "compare": false, 00:12:44.959 "compare_and_write": false, 00:12:44.959 "abort": false, 00:12:44.959 "seek_hole": false, 00:12:44.959 "seek_data": false, 00:12:44.959 "copy": false, 00:12:44.959 
"nvme_iov_md": false 00:12:44.959 }, 00:12:44.959 "memory_domains": [ 00:12:44.959 { 00:12:44.959 "dma_device_id": "system", 00:12:44.959 "dma_device_type": 1 00:12:44.959 }, 00:12:44.959 { 00:12:44.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.959 "dma_device_type": 2 00:12:44.959 }, 00:12:44.959 { 00:12:44.959 "dma_device_id": "system", 00:12:44.959 "dma_device_type": 1 00:12:44.959 }, 00:12:44.959 { 00:12:44.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.959 "dma_device_type": 2 00:12:44.959 }, 00:12:44.959 { 00:12:44.959 "dma_device_id": "system", 00:12:44.959 "dma_device_type": 1 00:12:44.960 }, 00:12:44.960 { 00:12:44.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.960 "dma_device_type": 2 00:12:44.960 }, 00:12:44.960 { 00:12:44.960 "dma_device_id": "system", 00:12:44.960 "dma_device_type": 1 00:12:44.960 }, 00:12:44.960 { 00:12:44.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.960 "dma_device_type": 2 00:12:44.960 } 00:12:44.960 ], 00:12:44.960 "driver_specific": { 00:12:44.960 "raid": { 00:12:44.960 "uuid": "c6fdaf18-0e03-445f-8938-d08f2f9fc83b", 00:12:44.960 "strip_size_kb": 64, 00:12:44.960 "state": "online", 00:12:44.960 "raid_level": "concat", 00:12:44.960 "superblock": true, 00:12:44.960 "num_base_bdevs": 4, 00:12:44.960 "num_base_bdevs_discovered": 4, 00:12:44.960 "num_base_bdevs_operational": 4, 00:12:44.960 "base_bdevs_list": [ 00:12:44.960 { 00:12:44.960 "name": "BaseBdev1", 00:12:44.960 "uuid": "8e9bb8c7-94e8-4589-91b0-c0e6f2add06b", 00:12:44.960 "is_configured": true, 00:12:44.960 "data_offset": 2048, 00:12:44.960 "data_size": 63488 00:12:44.960 }, 00:12:44.960 { 00:12:44.960 "name": "BaseBdev2", 00:12:44.960 "uuid": "58b9c76e-5c54-4f89-b0d8-6903afea4756", 00:12:44.960 "is_configured": true, 00:12:44.960 "data_offset": 2048, 00:12:44.960 "data_size": 63488 00:12:44.960 }, 00:12:44.960 { 00:12:44.960 "name": "BaseBdev3", 00:12:44.960 "uuid": "8012ac75-8f77-4003-b259-5e9c6b2d904c", 00:12:44.960 "is_configured": true, 
00:12:44.960 "data_offset": 2048, 00:12:44.960 "data_size": 63488 00:12:44.960 }, 00:12:44.960 { 00:12:44.960 "name": "BaseBdev4", 00:12:44.960 "uuid": "fe3112b9-d45c-4fa5-a134-f501e7661e1d", 00:12:44.960 "is_configured": true, 00:12:44.960 "data_offset": 2048, 00:12:44.960 "data_size": 63488 00:12:44.960 } 00:12:44.960 ] 00:12:44.960 } 00:12:44.960 } 00:12:44.960 }' 00:12:44.960 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:44.960 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:44.960 BaseBdev2 00:12:44.960 BaseBdev3 00:12:44.960 BaseBdev4' 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.220 09:49:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.220 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.220 [2024-11-27 09:49:46.337311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:45.220 [2024-11-27 09:49:46.337402] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:45.220 [2024-11-27 09:49:46.337504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:45.480 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.480 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.481 "name": "Existed_Raid", 00:12:45.481 "uuid": "c6fdaf18-0e03-445f-8938-d08f2f9fc83b", 00:12:45.481 "strip_size_kb": 64, 00:12:45.481 "state": "offline", 00:12:45.481 "raid_level": "concat", 00:12:45.481 "superblock": true, 00:12:45.481 "num_base_bdevs": 4, 00:12:45.481 "num_base_bdevs_discovered": 3, 00:12:45.481 "num_base_bdevs_operational": 3, 00:12:45.481 "base_bdevs_list": [ 00:12:45.481 { 00:12:45.481 "name": null, 00:12:45.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.481 "is_configured": false, 00:12:45.481 "data_offset": 0, 00:12:45.481 "data_size": 63488 00:12:45.481 }, 00:12:45.481 { 00:12:45.481 "name": "BaseBdev2", 00:12:45.481 "uuid": "58b9c76e-5c54-4f89-b0d8-6903afea4756", 00:12:45.481 "is_configured": true, 00:12:45.481 "data_offset": 2048, 00:12:45.481 "data_size": 63488 00:12:45.481 }, 00:12:45.481 { 00:12:45.481 "name": "BaseBdev3", 00:12:45.481 "uuid": "8012ac75-8f77-4003-b259-5e9c6b2d904c", 00:12:45.481 "is_configured": true, 00:12:45.481 "data_offset": 2048, 00:12:45.481 "data_size": 63488 00:12:45.481 }, 00:12:45.481 { 00:12:45.481 "name": "BaseBdev4", 00:12:45.481 "uuid": "fe3112b9-d45c-4fa5-a134-f501e7661e1d", 00:12:45.481 "is_configured": true, 00:12:45.481 "data_offset": 2048, 00:12:45.481 "data_size": 63488 00:12:45.481 } 00:12:45.481 ] 00:12:45.481 }' 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.481 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.051 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:46.051 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:46.051 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:46.051 09:49:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.051 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.051 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.051 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.051 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:46.051 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:46.051 09:49:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:46.051 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.051 09:49:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.051 [2024-11-27 09:49:46.950348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:46.051 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.051 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:46.051 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:46.051 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.051 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:46.051 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.051 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.051 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:46.051 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:46.051 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:46.051 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:46.051 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.051 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.051 [2024-11-27 09:49:47.138067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:46.311 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.311 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:46.311 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:46.311 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.311 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.311 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:46.311 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.311 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.311 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:46.311 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:46.311 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:46.311 09:49:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.311 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.311 [2024-11-27 09:49:47.330503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:46.311 [2024-11-27 09:49:47.330566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.572 BaseBdev2 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:46.572 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.573 [ 00:12:46.573 { 00:12:46.573 "name": "BaseBdev2", 00:12:46.573 "aliases": [ 00:12:46.573 
"812105fe-374b-4e5a-b99b-46c5a77dd4bc" 00:12:46.573 ], 00:12:46.573 "product_name": "Malloc disk", 00:12:46.573 "block_size": 512, 00:12:46.573 "num_blocks": 65536, 00:12:46.573 "uuid": "812105fe-374b-4e5a-b99b-46c5a77dd4bc", 00:12:46.573 "assigned_rate_limits": { 00:12:46.573 "rw_ios_per_sec": 0, 00:12:46.573 "rw_mbytes_per_sec": 0, 00:12:46.573 "r_mbytes_per_sec": 0, 00:12:46.573 "w_mbytes_per_sec": 0 00:12:46.573 }, 00:12:46.573 "claimed": false, 00:12:46.573 "zoned": false, 00:12:46.573 "supported_io_types": { 00:12:46.573 "read": true, 00:12:46.573 "write": true, 00:12:46.573 "unmap": true, 00:12:46.573 "flush": true, 00:12:46.573 "reset": true, 00:12:46.573 "nvme_admin": false, 00:12:46.573 "nvme_io": false, 00:12:46.573 "nvme_io_md": false, 00:12:46.573 "write_zeroes": true, 00:12:46.573 "zcopy": true, 00:12:46.573 "get_zone_info": false, 00:12:46.573 "zone_management": false, 00:12:46.573 "zone_append": false, 00:12:46.573 "compare": false, 00:12:46.573 "compare_and_write": false, 00:12:46.573 "abort": true, 00:12:46.573 "seek_hole": false, 00:12:46.573 "seek_data": false, 00:12:46.573 "copy": true, 00:12:46.573 "nvme_iov_md": false 00:12:46.573 }, 00:12:46.573 "memory_domains": [ 00:12:46.573 { 00:12:46.573 "dma_device_id": "system", 00:12:46.573 "dma_device_type": 1 00:12:46.573 }, 00:12:46.573 { 00:12:46.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.573 "dma_device_type": 2 00:12:46.573 } 00:12:46.573 ], 00:12:46.573 "driver_specific": {} 00:12:46.573 } 00:12:46.573 ] 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:46.573 09:49:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.573 BaseBdev3 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.573 [ 00:12:46.573 { 
00:12:46.573 "name": "BaseBdev3", 00:12:46.573 "aliases": [ 00:12:46.573 "bae512b6-bb07-4343-a9df-1e6750edf6d5" 00:12:46.573 ], 00:12:46.573 "product_name": "Malloc disk", 00:12:46.573 "block_size": 512, 00:12:46.573 "num_blocks": 65536, 00:12:46.573 "uuid": "bae512b6-bb07-4343-a9df-1e6750edf6d5", 00:12:46.573 "assigned_rate_limits": { 00:12:46.573 "rw_ios_per_sec": 0, 00:12:46.573 "rw_mbytes_per_sec": 0, 00:12:46.573 "r_mbytes_per_sec": 0, 00:12:46.573 "w_mbytes_per_sec": 0 00:12:46.573 }, 00:12:46.573 "claimed": false, 00:12:46.573 "zoned": false, 00:12:46.573 "supported_io_types": { 00:12:46.573 "read": true, 00:12:46.573 "write": true, 00:12:46.573 "unmap": true, 00:12:46.573 "flush": true, 00:12:46.573 "reset": true, 00:12:46.573 "nvme_admin": false, 00:12:46.573 "nvme_io": false, 00:12:46.573 "nvme_io_md": false, 00:12:46.573 "write_zeroes": true, 00:12:46.573 "zcopy": true, 00:12:46.573 "get_zone_info": false, 00:12:46.573 "zone_management": false, 00:12:46.573 "zone_append": false, 00:12:46.573 "compare": false, 00:12:46.573 "compare_and_write": false, 00:12:46.573 "abort": true, 00:12:46.573 "seek_hole": false, 00:12:46.573 "seek_data": false, 00:12:46.573 "copy": true, 00:12:46.573 "nvme_iov_md": false 00:12:46.573 }, 00:12:46.573 "memory_domains": [ 00:12:46.573 { 00:12:46.573 "dma_device_id": "system", 00:12:46.573 "dma_device_type": 1 00:12:46.573 }, 00:12:46.573 { 00:12:46.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.573 "dma_device_type": 2 00:12:46.573 } 00:12:46.573 ], 00:12:46.573 "driver_specific": {} 00:12:46.573 } 00:12:46.573 ] 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.573 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.834 BaseBdev4 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:46.834 [ 00:12:46.834 { 00:12:46.834 "name": "BaseBdev4", 00:12:46.834 "aliases": [ 00:12:46.834 "01d62313-0271-4171-9813-20b5e76c878d" 00:12:46.834 ], 00:12:46.834 "product_name": "Malloc disk", 00:12:46.834 "block_size": 512, 00:12:46.834 "num_blocks": 65536, 00:12:46.834 "uuid": "01d62313-0271-4171-9813-20b5e76c878d", 00:12:46.834 "assigned_rate_limits": { 00:12:46.834 "rw_ios_per_sec": 0, 00:12:46.834 "rw_mbytes_per_sec": 0, 00:12:46.834 "r_mbytes_per_sec": 0, 00:12:46.834 "w_mbytes_per_sec": 0 00:12:46.834 }, 00:12:46.834 "claimed": false, 00:12:46.834 "zoned": false, 00:12:46.834 "supported_io_types": { 00:12:46.834 "read": true, 00:12:46.834 "write": true, 00:12:46.834 "unmap": true, 00:12:46.834 "flush": true, 00:12:46.834 "reset": true, 00:12:46.834 "nvme_admin": false, 00:12:46.834 "nvme_io": false, 00:12:46.834 "nvme_io_md": false, 00:12:46.834 "write_zeroes": true, 00:12:46.834 "zcopy": true, 00:12:46.834 "get_zone_info": false, 00:12:46.834 "zone_management": false, 00:12:46.834 "zone_append": false, 00:12:46.834 "compare": false, 00:12:46.834 "compare_and_write": false, 00:12:46.834 "abort": true, 00:12:46.834 "seek_hole": false, 00:12:46.834 "seek_data": false, 00:12:46.834 "copy": true, 00:12:46.834 "nvme_iov_md": false 00:12:46.834 }, 00:12:46.834 "memory_domains": [ 00:12:46.834 { 00:12:46.834 "dma_device_id": "system", 00:12:46.834 "dma_device_type": 1 00:12:46.834 }, 00:12:46.834 { 00:12:46.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.834 "dma_device_type": 2 00:12:46.834 } 00:12:46.834 ], 00:12:46.834 "driver_specific": {} 00:12:46.834 } 00:12:46.834 ] 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:46.834 09:49:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.834 [2024-11-27 09:49:47.804078] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:46.834 [2024-11-27 09:49:47.804205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:46.834 [2024-11-27 09:49:47.804364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:46.834 [2024-11-27 09:49:47.806901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:46.834 [2024-11-27 09:49:47.807026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.834 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.834 "name": "Existed_Raid", 00:12:46.834 "uuid": "b061c58d-5d3b-4d39-9658-78cb07e37770", 00:12:46.834 "strip_size_kb": 64, 00:12:46.834 "state": "configuring", 00:12:46.834 "raid_level": "concat", 00:12:46.834 "superblock": true, 00:12:46.834 "num_base_bdevs": 4, 00:12:46.834 "num_base_bdevs_discovered": 3, 00:12:46.834 "num_base_bdevs_operational": 4, 00:12:46.834 "base_bdevs_list": [ 00:12:46.834 { 00:12:46.834 "name": "BaseBdev1", 00:12:46.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.835 "is_configured": false, 00:12:46.835 "data_offset": 0, 00:12:46.835 "data_size": 0 00:12:46.835 }, 00:12:46.835 { 00:12:46.835 "name": "BaseBdev2", 00:12:46.835 "uuid": "812105fe-374b-4e5a-b99b-46c5a77dd4bc", 00:12:46.835 "is_configured": true, 00:12:46.835 "data_offset": 2048, 00:12:46.835 "data_size": 63488 
00:12:46.835 }, 00:12:46.835 { 00:12:46.835 "name": "BaseBdev3", 00:12:46.835 "uuid": "bae512b6-bb07-4343-a9df-1e6750edf6d5", 00:12:46.835 "is_configured": true, 00:12:46.835 "data_offset": 2048, 00:12:46.835 "data_size": 63488 00:12:46.835 }, 00:12:46.835 { 00:12:46.835 "name": "BaseBdev4", 00:12:46.835 "uuid": "01d62313-0271-4171-9813-20b5e76c878d", 00:12:46.835 "is_configured": true, 00:12:46.835 "data_offset": 2048, 00:12:46.835 "data_size": 63488 00:12:46.835 } 00:12:46.835 ] 00:12:46.835 }' 00:12:46.835 09:49:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.835 09:49:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.405 [2024-11-27 09:49:48.267323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.405 "name": "Existed_Raid", 00:12:47.405 "uuid": "b061c58d-5d3b-4d39-9658-78cb07e37770", 00:12:47.405 "strip_size_kb": 64, 00:12:47.405 "state": "configuring", 00:12:47.405 "raid_level": "concat", 00:12:47.405 "superblock": true, 00:12:47.405 "num_base_bdevs": 4, 00:12:47.405 "num_base_bdevs_discovered": 2, 00:12:47.405 "num_base_bdevs_operational": 4, 00:12:47.405 "base_bdevs_list": [ 00:12:47.405 { 00:12:47.405 "name": "BaseBdev1", 00:12:47.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.405 "is_configured": false, 00:12:47.405 "data_offset": 0, 00:12:47.405 "data_size": 0 00:12:47.405 }, 00:12:47.405 { 00:12:47.405 "name": null, 00:12:47.405 "uuid": "812105fe-374b-4e5a-b99b-46c5a77dd4bc", 00:12:47.405 "is_configured": false, 00:12:47.405 "data_offset": 0, 00:12:47.405 "data_size": 63488 
00:12:47.405 }, 00:12:47.405 { 00:12:47.405 "name": "BaseBdev3", 00:12:47.405 "uuid": "bae512b6-bb07-4343-a9df-1e6750edf6d5", 00:12:47.405 "is_configured": true, 00:12:47.405 "data_offset": 2048, 00:12:47.405 "data_size": 63488 00:12:47.405 }, 00:12:47.405 { 00:12:47.405 "name": "BaseBdev4", 00:12:47.405 "uuid": "01d62313-0271-4171-9813-20b5e76c878d", 00:12:47.405 "is_configured": true, 00:12:47.405 "data_offset": 2048, 00:12:47.405 "data_size": 63488 00:12:47.405 } 00:12:47.405 ] 00:12:47.405 }' 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.405 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.665 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.665 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:47.665 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.665 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.925 [2024-11-27 09:49:48.871579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:47.925 BaseBdev1 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.925 [ 00:12:47.925 { 00:12:47.925 "name": "BaseBdev1", 00:12:47.925 "aliases": [ 00:12:47.925 "26e99c79-b5c9-4e9a-a1b4-20d3997f9131" 00:12:47.925 ], 00:12:47.925 "product_name": "Malloc disk", 00:12:47.925 "block_size": 512, 00:12:47.925 "num_blocks": 65536, 00:12:47.925 "uuid": "26e99c79-b5c9-4e9a-a1b4-20d3997f9131", 00:12:47.925 "assigned_rate_limits": { 00:12:47.925 "rw_ios_per_sec": 0, 00:12:47.925 "rw_mbytes_per_sec": 0, 
00:12:47.925 "r_mbytes_per_sec": 0, 00:12:47.925 "w_mbytes_per_sec": 0 00:12:47.925 }, 00:12:47.925 "claimed": true, 00:12:47.925 "claim_type": "exclusive_write", 00:12:47.925 "zoned": false, 00:12:47.925 "supported_io_types": { 00:12:47.925 "read": true, 00:12:47.925 "write": true, 00:12:47.925 "unmap": true, 00:12:47.925 "flush": true, 00:12:47.925 "reset": true, 00:12:47.925 "nvme_admin": false, 00:12:47.925 "nvme_io": false, 00:12:47.925 "nvme_io_md": false, 00:12:47.925 "write_zeroes": true, 00:12:47.925 "zcopy": true, 00:12:47.925 "get_zone_info": false, 00:12:47.925 "zone_management": false, 00:12:47.925 "zone_append": false, 00:12:47.925 "compare": false, 00:12:47.925 "compare_and_write": false, 00:12:47.925 "abort": true, 00:12:47.925 "seek_hole": false, 00:12:47.925 "seek_data": false, 00:12:47.925 "copy": true, 00:12:47.925 "nvme_iov_md": false 00:12:47.925 }, 00:12:47.925 "memory_domains": [ 00:12:47.925 { 00:12:47.925 "dma_device_id": "system", 00:12:47.925 "dma_device_type": 1 00:12:47.925 }, 00:12:47.925 { 00:12:47.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.925 "dma_device_type": 2 00:12:47.925 } 00:12:47.925 ], 00:12:47.925 "driver_specific": {} 00:12:47.925 } 00:12:47.925 ] 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:47.925 09:49:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.925 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.926 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.926 "name": "Existed_Raid", 00:12:47.926 "uuid": "b061c58d-5d3b-4d39-9658-78cb07e37770", 00:12:47.926 "strip_size_kb": 64, 00:12:47.926 "state": "configuring", 00:12:47.926 "raid_level": "concat", 00:12:47.926 "superblock": true, 00:12:47.926 "num_base_bdevs": 4, 00:12:47.926 "num_base_bdevs_discovered": 3, 00:12:47.926 "num_base_bdevs_operational": 4, 00:12:47.926 "base_bdevs_list": [ 00:12:47.926 { 00:12:47.926 "name": "BaseBdev1", 00:12:47.926 "uuid": "26e99c79-b5c9-4e9a-a1b4-20d3997f9131", 00:12:47.926 "is_configured": true, 00:12:47.926 "data_offset": 2048, 00:12:47.926 "data_size": 63488 00:12:47.926 }, 00:12:47.926 { 
00:12:47.926 "name": null, 00:12:47.926 "uuid": "812105fe-374b-4e5a-b99b-46c5a77dd4bc", 00:12:47.926 "is_configured": false, 00:12:47.926 "data_offset": 0, 00:12:47.926 "data_size": 63488 00:12:47.926 }, 00:12:47.926 { 00:12:47.926 "name": "BaseBdev3", 00:12:47.926 "uuid": "bae512b6-bb07-4343-a9df-1e6750edf6d5", 00:12:47.926 "is_configured": true, 00:12:47.926 "data_offset": 2048, 00:12:47.926 "data_size": 63488 00:12:47.926 }, 00:12:47.926 { 00:12:47.926 "name": "BaseBdev4", 00:12:47.926 "uuid": "01d62313-0271-4171-9813-20b5e76c878d", 00:12:47.926 "is_configured": true, 00:12:47.926 "data_offset": 2048, 00:12:47.926 "data_size": 63488 00:12:47.926 } 00:12:47.926 ] 00:12:47.926 }' 00:12:47.926 09:49:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.926 09:49:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.496 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.496 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:48.496 09:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.496 09:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.496 09:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.496 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:48.496 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:48.496 09:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.496 09:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.496 [2024-11-27 09:49:49.462709] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:48.496 09:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.496 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:48.496 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.496 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.496 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:48.496 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.496 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:48.496 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.497 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.497 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.497 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.497 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.497 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.497 09:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.497 09:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.497 09:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.497 09:49:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.497 "name": "Existed_Raid", 00:12:48.497 "uuid": "b061c58d-5d3b-4d39-9658-78cb07e37770", 00:12:48.497 "strip_size_kb": 64, 00:12:48.497 "state": "configuring", 00:12:48.497 "raid_level": "concat", 00:12:48.497 "superblock": true, 00:12:48.497 "num_base_bdevs": 4, 00:12:48.497 "num_base_bdevs_discovered": 2, 00:12:48.497 "num_base_bdevs_operational": 4, 00:12:48.497 "base_bdevs_list": [ 00:12:48.497 { 00:12:48.497 "name": "BaseBdev1", 00:12:48.497 "uuid": "26e99c79-b5c9-4e9a-a1b4-20d3997f9131", 00:12:48.497 "is_configured": true, 00:12:48.497 "data_offset": 2048, 00:12:48.497 "data_size": 63488 00:12:48.497 }, 00:12:48.497 { 00:12:48.497 "name": null, 00:12:48.497 "uuid": "812105fe-374b-4e5a-b99b-46c5a77dd4bc", 00:12:48.497 "is_configured": false, 00:12:48.497 "data_offset": 0, 00:12:48.497 "data_size": 63488 00:12:48.497 }, 00:12:48.497 { 00:12:48.497 "name": null, 00:12:48.497 "uuid": "bae512b6-bb07-4343-a9df-1e6750edf6d5", 00:12:48.497 "is_configured": false, 00:12:48.497 "data_offset": 0, 00:12:48.497 "data_size": 63488 00:12:48.497 }, 00:12:48.497 { 00:12:48.497 "name": "BaseBdev4", 00:12:48.497 "uuid": "01d62313-0271-4171-9813-20b5e76c878d", 00:12:48.497 "is_configured": true, 00:12:48.497 "data_offset": 2048, 00:12:48.497 "data_size": 63488 00:12:48.497 } 00:12:48.497 ] 00:12:48.497 }' 00:12:48.497 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.497 09:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.066 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.066 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:49.066 09:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.066 
09:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.066 09:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.066 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:49.066 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:49.066 09:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.066 09:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.066 [2024-11-27 09:49:49.989816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:49.066 09:49:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.066 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:49.066 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.066 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.066 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:49.067 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.067 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.067 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.067 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.067 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:49.067 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.067 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.067 09:49:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.067 09:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.067 09:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.067 09:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.067 09:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.067 "name": "Existed_Raid", 00:12:49.067 "uuid": "b061c58d-5d3b-4d39-9658-78cb07e37770", 00:12:49.067 "strip_size_kb": 64, 00:12:49.067 "state": "configuring", 00:12:49.067 "raid_level": "concat", 00:12:49.067 "superblock": true, 00:12:49.067 "num_base_bdevs": 4, 00:12:49.067 "num_base_bdevs_discovered": 3, 00:12:49.067 "num_base_bdevs_operational": 4, 00:12:49.067 "base_bdevs_list": [ 00:12:49.067 { 00:12:49.067 "name": "BaseBdev1", 00:12:49.067 "uuid": "26e99c79-b5c9-4e9a-a1b4-20d3997f9131", 00:12:49.067 "is_configured": true, 00:12:49.067 "data_offset": 2048, 00:12:49.067 "data_size": 63488 00:12:49.067 }, 00:12:49.067 { 00:12:49.067 "name": null, 00:12:49.067 "uuid": "812105fe-374b-4e5a-b99b-46c5a77dd4bc", 00:12:49.067 "is_configured": false, 00:12:49.067 "data_offset": 0, 00:12:49.067 "data_size": 63488 00:12:49.067 }, 00:12:49.067 { 00:12:49.067 "name": "BaseBdev3", 00:12:49.067 "uuid": "bae512b6-bb07-4343-a9df-1e6750edf6d5", 00:12:49.067 "is_configured": true, 00:12:49.067 "data_offset": 2048, 00:12:49.067 "data_size": 63488 00:12:49.067 }, 00:12:49.067 { 00:12:49.067 "name": "BaseBdev4", 00:12:49.067 "uuid": 
"01d62313-0271-4171-9813-20b5e76c878d", 00:12:49.067 "is_configured": true, 00:12:49.067 "data_offset": 2048, 00:12:49.067 "data_size": 63488 00:12:49.067 } 00:12:49.067 ] 00:12:49.067 }' 00:12:49.067 09:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.067 09:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.637 [2024-11-27 09:49:50.497054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.637 09:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.637 "name": "Existed_Raid", 00:12:49.637 "uuid": "b061c58d-5d3b-4d39-9658-78cb07e37770", 00:12:49.637 "strip_size_kb": 64, 00:12:49.637 "state": "configuring", 00:12:49.637 "raid_level": "concat", 00:12:49.637 "superblock": true, 00:12:49.637 "num_base_bdevs": 4, 00:12:49.637 "num_base_bdevs_discovered": 2, 00:12:49.637 "num_base_bdevs_operational": 4, 00:12:49.638 "base_bdevs_list": [ 00:12:49.638 { 00:12:49.638 "name": null, 00:12:49.638 
"uuid": "26e99c79-b5c9-4e9a-a1b4-20d3997f9131", 00:12:49.638 "is_configured": false, 00:12:49.638 "data_offset": 0, 00:12:49.638 "data_size": 63488 00:12:49.638 }, 00:12:49.638 { 00:12:49.638 "name": null, 00:12:49.638 "uuid": "812105fe-374b-4e5a-b99b-46c5a77dd4bc", 00:12:49.638 "is_configured": false, 00:12:49.638 "data_offset": 0, 00:12:49.638 "data_size": 63488 00:12:49.638 }, 00:12:49.638 { 00:12:49.638 "name": "BaseBdev3", 00:12:49.638 "uuid": "bae512b6-bb07-4343-a9df-1e6750edf6d5", 00:12:49.638 "is_configured": true, 00:12:49.638 "data_offset": 2048, 00:12:49.638 "data_size": 63488 00:12:49.638 }, 00:12:49.638 { 00:12:49.638 "name": "BaseBdev4", 00:12:49.638 "uuid": "01d62313-0271-4171-9813-20b5e76c878d", 00:12:49.638 "is_configured": true, 00:12:49.638 "data_offset": 2048, 00:12:49.638 "data_size": 63488 00:12:49.638 } 00:12:49.638 ] 00:12:49.638 }' 00:12:49.638 09:49:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.638 09:49:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.209 [2024-11-27 09:49:51.139970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.209 09:49:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.209 "name": "Existed_Raid", 00:12:50.209 "uuid": "b061c58d-5d3b-4d39-9658-78cb07e37770", 00:12:50.209 "strip_size_kb": 64, 00:12:50.209 "state": "configuring", 00:12:50.209 "raid_level": "concat", 00:12:50.209 "superblock": true, 00:12:50.209 "num_base_bdevs": 4, 00:12:50.209 "num_base_bdevs_discovered": 3, 00:12:50.209 "num_base_bdevs_operational": 4, 00:12:50.209 "base_bdevs_list": [ 00:12:50.209 { 00:12:50.209 "name": null, 00:12:50.209 "uuid": "26e99c79-b5c9-4e9a-a1b4-20d3997f9131", 00:12:50.209 "is_configured": false, 00:12:50.209 "data_offset": 0, 00:12:50.209 "data_size": 63488 00:12:50.209 }, 00:12:50.209 { 00:12:50.209 "name": "BaseBdev2", 00:12:50.209 "uuid": "812105fe-374b-4e5a-b99b-46c5a77dd4bc", 00:12:50.209 "is_configured": true, 00:12:50.209 "data_offset": 2048, 00:12:50.209 "data_size": 63488 00:12:50.209 }, 00:12:50.209 { 00:12:50.209 "name": "BaseBdev3", 00:12:50.209 "uuid": "bae512b6-bb07-4343-a9df-1e6750edf6d5", 00:12:50.209 "is_configured": true, 00:12:50.209 "data_offset": 2048, 00:12:50.209 "data_size": 63488 00:12:50.209 }, 00:12:50.209 { 00:12:50.209 "name": "BaseBdev4", 00:12:50.209 "uuid": "01d62313-0271-4171-9813-20b5e76c878d", 00:12:50.209 "is_configured": true, 00:12:50.209 "data_offset": 2048, 00:12:50.209 "data_size": 63488 00:12:50.209 } 00:12:50.209 ] 00:12:50.209 }' 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.209 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.470 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:50.470 09:49:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.470 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.470 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.470 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.470 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:50.470 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:50.470 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.470 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.470 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.730 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.730 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 26e99c79-b5c9-4e9a-a1b4-20d3997f9131 00:12:50.730 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.730 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.730 [2024-11-27 09:49:51.656902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:50.730 [2024-11-27 09:49:51.657220] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:50.730 [2024-11-27 09:49:51.657236] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:50.730 [2024-11-27 09:49:51.657552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:50.730 [2024-11-27 09:49:51.657711] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:50.730 [2024-11-27 09:49:51.657723] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:50.730 NewBaseBdev 00:12:50.730 [2024-11-27 09:49:51.657869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.730 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.730 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:50.730 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:50.730 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:50.730 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:50.730 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:50.730 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:50.730 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:50.730 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.730 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.730 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.731 09:49:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.731 [ 00:12:50.731 { 00:12:50.731 "name": "NewBaseBdev", 00:12:50.731 "aliases": [ 00:12:50.731 "26e99c79-b5c9-4e9a-a1b4-20d3997f9131" 00:12:50.731 ], 00:12:50.731 "product_name": "Malloc disk", 00:12:50.731 "block_size": 512, 00:12:50.731 "num_blocks": 65536, 00:12:50.731 "uuid": "26e99c79-b5c9-4e9a-a1b4-20d3997f9131", 00:12:50.731 "assigned_rate_limits": { 00:12:50.731 "rw_ios_per_sec": 0, 00:12:50.731 "rw_mbytes_per_sec": 0, 00:12:50.731 "r_mbytes_per_sec": 0, 00:12:50.731 "w_mbytes_per_sec": 0 00:12:50.731 }, 00:12:50.731 "claimed": true, 00:12:50.731 "claim_type": "exclusive_write", 00:12:50.731 "zoned": false, 00:12:50.731 "supported_io_types": { 00:12:50.731 "read": true, 00:12:50.731 "write": true, 00:12:50.731 "unmap": true, 00:12:50.731 "flush": true, 00:12:50.731 "reset": true, 00:12:50.731 "nvme_admin": false, 00:12:50.731 "nvme_io": false, 00:12:50.731 "nvme_io_md": false, 00:12:50.731 "write_zeroes": true, 00:12:50.731 "zcopy": true, 00:12:50.731 "get_zone_info": false, 00:12:50.731 "zone_management": false, 00:12:50.731 "zone_append": false, 00:12:50.731 "compare": false, 00:12:50.731 "compare_and_write": false, 00:12:50.731 "abort": true, 00:12:50.731 "seek_hole": false, 00:12:50.731 "seek_data": false, 00:12:50.731 "copy": true, 00:12:50.731 "nvme_iov_md": false 00:12:50.731 }, 00:12:50.731 "memory_domains": [ 00:12:50.731 { 00:12:50.731 "dma_device_id": "system", 00:12:50.731 "dma_device_type": 1 00:12:50.731 }, 00:12:50.731 { 00:12:50.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.731 "dma_device_type": 2 00:12:50.731 } 00:12:50.731 ], 00:12:50.731 "driver_specific": {} 00:12:50.731 } 00:12:50.731 ] 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:50.731 09:49:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.731 "name": "Existed_Raid", 00:12:50.731 "uuid": "b061c58d-5d3b-4d39-9658-78cb07e37770", 00:12:50.731 "strip_size_kb": 64, 00:12:50.731 
"state": "online", 00:12:50.731 "raid_level": "concat", 00:12:50.731 "superblock": true, 00:12:50.731 "num_base_bdevs": 4, 00:12:50.731 "num_base_bdevs_discovered": 4, 00:12:50.731 "num_base_bdevs_operational": 4, 00:12:50.731 "base_bdevs_list": [ 00:12:50.731 { 00:12:50.731 "name": "NewBaseBdev", 00:12:50.731 "uuid": "26e99c79-b5c9-4e9a-a1b4-20d3997f9131", 00:12:50.731 "is_configured": true, 00:12:50.731 "data_offset": 2048, 00:12:50.731 "data_size": 63488 00:12:50.731 }, 00:12:50.731 { 00:12:50.731 "name": "BaseBdev2", 00:12:50.731 "uuid": "812105fe-374b-4e5a-b99b-46c5a77dd4bc", 00:12:50.731 "is_configured": true, 00:12:50.731 "data_offset": 2048, 00:12:50.731 "data_size": 63488 00:12:50.731 }, 00:12:50.731 { 00:12:50.731 "name": "BaseBdev3", 00:12:50.731 "uuid": "bae512b6-bb07-4343-a9df-1e6750edf6d5", 00:12:50.731 "is_configured": true, 00:12:50.731 "data_offset": 2048, 00:12:50.731 "data_size": 63488 00:12:50.731 }, 00:12:50.731 { 00:12:50.731 "name": "BaseBdev4", 00:12:50.731 "uuid": "01d62313-0271-4171-9813-20b5e76c878d", 00:12:50.731 "is_configured": true, 00:12:50.731 "data_offset": 2048, 00:12:50.731 "data_size": 63488 00:12:50.731 } 00:12:50.731 ] 00:12:50.731 }' 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.731 09:49:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.301 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:51.301 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:51.301 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:51.301 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:51.301 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:51.302 
09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.302 [2024-11-27 09:49:52.168549] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:51.302 "name": "Existed_Raid", 00:12:51.302 "aliases": [ 00:12:51.302 "b061c58d-5d3b-4d39-9658-78cb07e37770" 00:12:51.302 ], 00:12:51.302 "product_name": "Raid Volume", 00:12:51.302 "block_size": 512, 00:12:51.302 "num_blocks": 253952, 00:12:51.302 "uuid": "b061c58d-5d3b-4d39-9658-78cb07e37770", 00:12:51.302 "assigned_rate_limits": { 00:12:51.302 "rw_ios_per_sec": 0, 00:12:51.302 "rw_mbytes_per_sec": 0, 00:12:51.302 "r_mbytes_per_sec": 0, 00:12:51.302 "w_mbytes_per_sec": 0 00:12:51.302 }, 00:12:51.302 "claimed": false, 00:12:51.302 "zoned": false, 00:12:51.302 "supported_io_types": { 00:12:51.302 "read": true, 00:12:51.302 "write": true, 00:12:51.302 "unmap": true, 00:12:51.302 "flush": true, 00:12:51.302 "reset": true, 00:12:51.302 "nvme_admin": false, 00:12:51.302 "nvme_io": false, 00:12:51.302 "nvme_io_md": false, 00:12:51.302 "write_zeroes": true, 00:12:51.302 "zcopy": false, 00:12:51.302 "get_zone_info": false, 00:12:51.302 "zone_management": false, 00:12:51.302 "zone_append": false, 00:12:51.302 "compare": false, 00:12:51.302 "compare_and_write": false, 00:12:51.302 "abort": 
false, 00:12:51.302 "seek_hole": false, 00:12:51.302 "seek_data": false, 00:12:51.302 "copy": false, 00:12:51.302 "nvme_iov_md": false 00:12:51.302 }, 00:12:51.302 "memory_domains": [ 00:12:51.302 { 00:12:51.302 "dma_device_id": "system", 00:12:51.302 "dma_device_type": 1 00:12:51.302 }, 00:12:51.302 { 00:12:51.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.302 "dma_device_type": 2 00:12:51.302 }, 00:12:51.302 { 00:12:51.302 "dma_device_id": "system", 00:12:51.302 "dma_device_type": 1 00:12:51.302 }, 00:12:51.302 { 00:12:51.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.302 "dma_device_type": 2 00:12:51.302 }, 00:12:51.302 { 00:12:51.302 "dma_device_id": "system", 00:12:51.302 "dma_device_type": 1 00:12:51.302 }, 00:12:51.302 { 00:12:51.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.302 "dma_device_type": 2 00:12:51.302 }, 00:12:51.302 { 00:12:51.302 "dma_device_id": "system", 00:12:51.302 "dma_device_type": 1 00:12:51.302 }, 00:12:51.302 { 00:12:51.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.302 "dma_device_type": 2 00:12:51.302 } 00:12:51.302 ], 00:12:51.302 "driver_specific": { 00:12:51.302 "raid": { 00:12:51.302 "uuid": "b061c58d-5d3b-4d39-9658-78cb07e37770", 00:12:51.302 "strip_size_kb": 64, 00:12:51.302 "state": "online", 00:12:51.302 "raid_level": "concat", 00:12:51.302 "superblock": true, 00:12:51.302 "num_base_bdevs": 4, 00:12:51.302 "num_base_bdevs_discovered": 4, 00:12:51.302 "num_base_bdevs_operational": 4, 00:12:51.302 "base_bdevs_list": [ 00:12:51.302 { 00:12:51.302 "name": "NewBaseBdev", 00:12:51.302 "uuid": "26e99c79-b5c9-4e9a-a1b4-20d3997f9131", 00:12:51.302 "is_configured": true, 00:12:51.302 "data_offset": 2048, 00:12:51.302 "data_size": 63488 00:12:51.302 }, 00:12:51.302 { 00:12:51.302 "name": "BaseBdev2", 00:12:51.302 "uuid": "812105fe-374b-4e5a-b99b-46c5a77dd4bc", 00:12:51.302 "is_configured": true, 00:12:51.302 "data_offset": 2048, 00:12:51.302 "data_size": 63488 00:12:51.302 }, 00:12:51.302 { 00:12:51.302 
"name": "BaseBdev3", 00:12:51.302 "uuid": "bae512b6-bb07-4343-a9df-1e6750edf6d5", 00:12:51.302 "is_configured": true, 00:12:51.302 "data_offset": 2048, 00:12:51.302 "data_size": 63488 00:12:51.302 }, 00:12:51.302 { 00:12:51.302 "name": "BaseBdev4", 00:12:51.302 "uuid": "01d62313-0271-4171-9813-20b5e76c878d", 00:12:51.302 "is_configured": true, 00:12:51.302 "data_offset": 2048, 00:12:51.302 "data_size": 63488 00:12:51.302 } 00:12:51.302 ] 00:12:51.302 } 00:12:51.302 } 00:12:51.302 }' 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:51.302 BaseBdev2 00:12:51.302 BaseBdev3 00:12:51.302 BaseBdev4' 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.302 09:49:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.302 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.577 [2024-11-27 09:49:52.515572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:51.577 [2024-11-27 09:49:52.515608] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.577 [2024-11-27 09:49:52.515694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.577 [2024-11-27 09:49:52.515776] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.577 [2024-11-27 09:49:52.515786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72224 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72224 ']' 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72224 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72224 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72224' 00:12:51.577 killing process with pid 72224 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72224 00:12:51.577 [2024-11-27 09:49:52.563418] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:51.577 09:49:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72224 00:12:51.857 [2024-11-27 09:49:52.968717] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:53.238 09:49:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:53.238 00:12:53.238 real 0m12.136s 00:12:53.239 user 0m18.970s 00:12:53.239 sys 0m2.327s 00:12:53.239 09:49:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:53.239 09:49:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.239 ************************************ 00:12:53.239 END TEST raid_state_function_test_sb 00:12:53.239 ************************************ 00:12:53.239 09:49:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:12:53.239 09:49:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:53.239 09:49:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:53.239 09:49:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:53.239 ************************************ 00:12:53.239 START TEST raid_superblock_test 00:12:53.239 ************************************ 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72904 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72904 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72904 ']' 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:53.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:53.239 09:49:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.239 [2024-11-27 09:49:54.326989] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
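At this point the test launches `bdev_svc` and blocks in `waitforlisten 72904`, i.e. it polls until the daemon is up and its RPC socket `/var/tmp/spdk.sock` exists. A simplified Python sketch of that polling loop (assumption: the real `waitforlisten` also checks that the pid is still alive and that the RPC server answers; this only models the socket-appearance wait, and the function name is made up for illustration):

```python
import os
import time

def wait_for_listen(sock_path, timeout=10.0, interval=0.1):
    """Poll until a UNIX-domain RPC socket path appears, or give up after timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(sock_path):
            return True
        time.sleep(interval)
    return False
```

In the log the wait succeeds once the SPDK reactor has started and printed the EAL/initialization notices that follow.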
00:12:53.239 [2024-11-27 09:49:54.327239] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72904 ] 00:12:53.499 [2024-11-27 09:49:54.507961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.758 [2024-11-27 09:49:54.645360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.758 [2024-11-27 09:49:54.873506] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:53.758 [2024-11-27 09:49:54.873695] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:54.327 
09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.327 malloc1 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.327 [2024-11-27 09:49:55.213078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:54.327 [2024-11-27 09:49:55.213194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.327 [2024-11-27 09:49:55.213226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:54.327 [2024-11-27 09:49:55.213236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.327 [2024-11-27 09:49:55.215736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.327 [2024-11-27 09:49:55.215775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:54.327 pt1 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.327 malloc2 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.327 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.327 [2024-11-27 09:49:55.275414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:54.327 [2024-11-27 09:49:55.275522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.327 [2024-11-27 09:49:55.275586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:54.328 [2024-11-27 09:49:55.275628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.328 [2024-11-27 09:49:55.278202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.328 [2024-11-27 09:49:55.278272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:54.328 
pt2 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.328 malloc3 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.328 [2024-11-27 09:49:55.357287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:54.328 [2024-11-27 09:49:55.357385] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.328 [2024-11-27 09:49:55.357429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:54.328 [2024-11-27 09:49:55.357459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.328 [2024-11-27 09:49:55.359945] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.328 [2024-11-27 09:49:55.360029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:54.328 pt3 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.328 malloc4 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.328 [2024-11-27 09:49:55.424191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:54.328 [2024-11-27 09:49:55.424278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.328 [2024-11-27 09:49:55.424305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:54.328 [2024-11-27 09:49:55.424314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.328 [2024-11-27 09:49:55.426744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.328 [2024-11-27 09:49:55.426780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:54.328 pt4 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.328 [2024-11-27 09:49:55.436207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:54.328 [2024-11-27 
09:49:55.438356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:54.328 [2024-11-27 09:49:55.438447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:54.328 [2024-11-27 09:49:55.438498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:54.328 [2024-11-27 09:49:55.438694] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:54.328 [2024-11-27 09:49:55.438705] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:54.328 [2024-11-27 09:49:55.438962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:54.328 [2024-11-27 09:49:55.439146] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:54.328 [2024-11-27 09:49:55.439160] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:54.328 [2024-11-27 09:49:55.439301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.328 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.588 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.588 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.588 "name": "raid_bdev1", 00:12:54.588 "uuid": "a22f2ae2-cd67-4cd7-848b-3adae0c4a2dc", 00:12:54.588 "strip_size_kb": 64, 00:12:54.588 "state": "online", 00:12:54.588 "raid_level": "concat", 00:12:54.588 "superblock": true, 00:12:54.588 "num_base_bdevs": 4, 00:12:54.588 "num_base_bdevs_discovered": 4, 00:12:54.588 "num_base_bdevs_operational": 4, 00:12:54.588 "base_bdevs_list": [ 00:12:54.588 { 00:12:54.588 "name": "pt1", 00:12:54.588 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:54.588 "is_configured": true, 00:12:54.588 "data_offset": 2048, 00:12:54.588 "data_size": 63488 00:12:54.588 }, 00:12:54.588 { 00:12:54.588 "name": "pt2", 00:12:54.588 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:54.588 "is_configured": true, 00:12:54.588 "data_offset": 2048, 00:12:54.588 "data_size": 63488 00:12:54.588 }, 00:12:54.588 { 00:12:54.588 "name": "pt3", 00:12:54.588 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:54.588 "is_configured": true, 00:12:54.588 "data_offset": 2048, 00:12:54.588 
"data_size": 63488 00:12:54.588 }, 00:12:54.588 { 00:12:54.588 "name": "pt4", 00:12:54.588 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:54.588 "is_configured": true, 00:12:54.588 "data_offset": 2048, 00:12:54.588 "data_size": 63488 00:12:54.588 } 00:12:54.588 ] 00:12:54.588 }' 00:12:54.588 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.588 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.849 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:54.849 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:54.849 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:54.849 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:54.849 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:54.849 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:54.849 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:54.849 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:54.849 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.849 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.849 [2024-11-27 09:49:55.871809] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:54.849 09:49:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.849 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:54.849 "name": "raid_bdev1", 00:12:54.849 "aliases": [ 00:12:54.849 "a22f2ae2-cd67-4cd7-848b-3adae0c4a2dc" 
00:12:54.849 ], 00:12:54.849 "product_name": "Raid Volume", 00:12:54.849 "block_size": 512, 00:12:54.849 "num_blocks": 253952, 00:12:54.849 "uuid": "a22f2ae2-cd67-4cd7-848b-3adae0c4a2dc", 00:12:54.849 "assigned_rate_limits": { 00:12:54.849 "rw_ios_per_sec": 0, 00:12:54.849 "rw_mbytes_per_sec": 0, 00:12:54.849 "r_mbytes_per_sec": 0, 00:12:54.849 "w_mbytes_per_sec": 0 00:12:54.849 }, 00:12:54.849 "claimed": false, 00:12:54.849 "zoned": false, 00:12:54.849 "supported_io_types": { 00:12:54.849 "read": true, 00:12:54.849 "write": true, 00:12:54.849 "unmap": true, 00:12:54.849 "flush": true, 00:12:54.849 "reset": true, 00:12:54.849 "nvme_admin": false, 00:12:54.849 "nvme_io": false, 00:12:54.849 "nvme_io_md": false, 00:12:54.849 "write_zeroes": true, 00:12:54.849 "zcopy": false, 00:12:54.849 "get_zone_info": false, 00:12:54.849 "zone_management": false, 00:12:54.849 "zone_append": false, 00:12:54.849 "compare": false, 00:12:54.849 "compare_and_write": false, 00:12:54.849 "abort": false, 00:12:54.849 "seek_hole": false, 00:12:54.849 "seek_data": false, 00:12:54.849 "copy": false, 00:12:54.849 "nvme_iov_md": false 00:12:54.849 }, 00:12:54.849 "memory_domains": [ 00:12:54.849 { 00:12:54.849 "dma_device_id": "system", 00:12:54.849 "dma_device_type": 1 00:12:54.849 }, 00:12:54.849 { 00:12:54.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.849 "dma_device_type": 2 00:12:54.849 }, 00:12:54.849 { 00:12:54.849 "dma_device_id": "system", 00:12:54.849 "dma_device_type": 1 00:12:54.849 }, 00:12:54.849 { 00:12:54.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.849 "dma_device_type": 2 00:12:54.849 }, 00:12:54.849 { 00:12:54.849 "dma_device_id": "system", 00:12:54.849 "dma_device_type": 1 00:12:54.849 }, 00:12:54.849 { 00:12:54.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.849 "dma_device_type": 2 00:12:54.849 }, 00:12:54.849 { 00:12:54.849 "dma_device_id": "system", 00:12:54.849 "dma_device_type": 1 00:12:54.849 }, 00:12:54.849 { 00:12:54.849 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:54.849 "dma_device_type": 2 00:12:54.849 } 00:12:54.849 ], 00:12:54.849 "driver_specific": { 00:12:54.849 "raid": { 00:12:54.849 "uuid": "a22f2ae2-cd67-4cd7-848b-3adae0c4a2dc", 00:12:54.849 "strip_size_kb": 64, 00:12:54.849 "state": "online", 00:12:54.849 "raid_level": "concat", 00:12:54.849 "superblock": true, 00:12:54.849 "num_base_bdevs": 4, 00:12:54.849 "num_base_bdevs_discovered": 4, 00:12:54.849 "num_base_bdevs_operational": 4, 00:12:54.849 "base_bdevs_list": [ 00:12:54.849 { 00:12:54.849 "name": "pt1", 00:12:54.849 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:54.849 "is_configured": true, 00:12:54.849 "data_offset": 2048, 00:12:54.849 "data_size": 63488 00:12:54.849 }, 00:12:54.849 { 00:12:54.849 "name": "pt2", 00:12:54.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:54.849 "is_configured": true, 00:12:54.849 "data_offset": 2048, 00:12:54.849 "data_size": 63488 00:12:54.849 }, 00:12:54.849 { 00:12:54.849 "name": "pt3", 00:12:54.849 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:54.849 "is_configured": true, 00:12:54.849 "data_offset": 2048, 00:12:54.849 "data_size": 63488 00:12:54.849 }, 00:12:54.849 { 00:12:54.849 "name": "pt4", 00:12:54.849 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:54.849 "is_configured": true, 00:12:54.849 "data_offset": 2048, 00:12:54.849 "data_size": 63488 00:12:54.849 } 00:12:54.849 ] 00:12:54.849 } 00:12:54.849 } 00:12:54.849 }' 00:12:54.849 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:54.849 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:54.849 pt2 00:12:54.849 pt3 00:12:54.849 pt4' 00:12:54.849 09:49:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.109 09:49:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:55.109 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.109 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:55.109 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.110 09:49:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:55.110 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.110 [2024-11-27 09:49:56.223139] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.370 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.370 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a22f2ae2-cd67-4cd7-848b-3adae0c4a2dc 00:12:55.370 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a22f2ae2-cd67-4cd7-848b-3adae0c4a2dc ']' 00:12:55.370 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:55.370 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.370 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.370 [2024-11-27 09:49:56.270744] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:55.370 [2024-11-27 09:49:56.270813] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.370 [2024-11-27 09:49:56.270922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.370 [2024-11-27 09:49:56.271032] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:55.370 [2024-11-27 09:49:56.271083] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:55.370 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.370 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.370 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.370 09:49:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:55.370 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:55.370 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.370 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.371 09:49:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.371 [2024-11-27 09:49:56.438508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:55.371 [2024-11-27 09:49:56.440937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:55.371 [2024-11-27 09:49:56.441012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:55.371 [2024-11-27 09:49:56.441056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:55.371 [2024-11-27 09:49:56.441116] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:55.371 [2024-11-27 09:49:56.441177] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:55.371 [2024-11-27 09:49:56.441205] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:55.371 [2024-11-27 09:49:56.441226] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:55.371 [2024-11-27 09:49:56.441242] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:55.371 [2024-11-27 09:49:56.441256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:12:55.371 request: 00:12:55.371 { 00:12:55.371 "name": "raid_bdev1", 00:12:55.371 "raid_level": "concat", 00:12:55.371 "base_bdevs": [ 00:12:55.371 "malloc1", 00:12:55.371 "malloc2", 00:12:55.371 "malloc3", 00:12:55.371 "malloc4" 00:12:55.371 ], 00:12:55.371 "strip_size_kb": 64, 00:12:55.371 "superblock": false, 00:12:55.371 "method": "bdev_raid_create", 00:12:55.371 "req_id": 1 00:12:55.371 } 00:12:55.371 Got JSON-RPC error response 00:12:55.371 response: 00:12:55.371 { 00:12:55.371 "code": -17, 00:12:55.371 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:55.371 } 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.371 [2024-11-27 09:49:56.494348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:55.371 [2024-11-27 09:49:56.494474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.371 [2024-11-27 09:49:56.494517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:55.371 [2024-11-27 09:49:56.494574] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.371 [2024-11-27 09:49:56.497451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.371 [2024-11-27 09:49:56.497532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:55.371 [2024-11-27 09:49:56.497651] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:55.371 [2024-11-27 09:49:56.497739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:55.371 pt1 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.371 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.631 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.631 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:55.631 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.631 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.631 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.631 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.631 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.631 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.631 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.631 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.631 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.631 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.631 "name": "raid_bdev1", 00:12:55.631 "uuid": "a22f2ae2-cd67-4cd7-848b-3adae0c4a2dc", 00:12:55.631 "strip_size_kb": 64, 00:12:55.631 "state": "configuring", 00:12:55.631 "raid_level": "concat", 00:12:55.631 "superblock": true, 00:12:55.631 "num_base_bdevs": 4, 00:12:55.631 "num_base_bdevs_discovered": 1, 00:12:55.631 "num_base_bdevs_operational": 4, 00:12:55.631 "base_bdevs_list": [ 00:12:55.631 { 00:12:55.631 "name": "pt1", 00:12:55.631 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:55.631 "is_configured": true, 00:12:55.631 "data_offset": 2048, 00:12:55.631 "data_size": 63488 00:12:55.631 }, 00:12:55.631 { 00:12:55.631 "name": null, 00:12:55.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:55.631 "is_configured": false, 00:12:55.631 "data_offset": 2048, 00:12:55.631 "data_size": 63488 00:12:55.631 }, 00:12:55.631 { 00:12:55.631 "name": null, 00:12:55.631 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:55.631 "is_configured": false, 00:12:55.631 "data_offset": 2048, 00:12:55.631 "data_size": 63488 00:12:55.631 }, 00:12:55.631 { 00:12:55.631 "name": null, 00:12:55.631 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:55.631 "is_configured": false, 00:12:55.631 "data_offset": 2048, 00:12:55.631 "data_size": 63488 00:12:55.631 } 00:12:55.631 ] 00:12:55.631 }' 00:12:55.631 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.631 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.893 [2024-11-27 09:49:56.929664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:55.893 [2024-11-27 09:49:56.929754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.893 [2024-11-27 09:49:56.929780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:55.893 [2024-11-27 09:49:56.929793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.893 [2024-11-27 09:49:56.930348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.893 [2024-11-27 09:49:56.930371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:55.893 [2024-11-27 09:49:56.930470] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:55.893 [2024-11-27 09:49:56.930502] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:55.893 pt2 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.893 [2024-11-27 09:49:56.937637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.893 09:49:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.893 "name": "raid_bdev1", 00:12:55.893 "uuid": "a22f2ae2-cd67-4cd7-848b-3adae0c4a2dc", 00:12:55.893 "strip_size_kb": 64, 00:12:55.893 "state": "configuring", 00:12:55.893 "raid_level": "concat", 00:12:55.893 "superblock": true, 00:12:55.893 "num_base_bdevs": 4, 00:12:55.893 "num_base_bdevs_discovered": 1, 00:12:55.893 "num_base_bdevs_operational": 4, 00:12:55.893 "base_bdevs_list": [ 00:12:55.893 { 00:12:55.893 "name": "pt1", 00:12:55.893 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:55.893 "is_configured": true, 00:12:55.893 "data_offset": 2048, 00:12:55.893 "data_size": 63488 00:12:55.893 }, 00:12:55.893 { 00:12:55.893 "name": null, 00:12:55.893 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:55.893 "is_configured": false, 00:12:55.893 "data_offset": 0, 00:12:55.893 "data_size": 63488 00:12:55.893 }, 00:12:55.893 { 00:12:55.893 "name": null, 00:12:55.893 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:55.893 "is_configured": false, 00:12:55.893 "data_offset": 2048, 00:12:55.893 "data_size": 63488 00:12:55.893 }, 00:12:55.893 { 00:12:55.893 "name": null, 00:12:55.893 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:55.893 "is_configured": false, 00:12:55.893 "data_offset": 2048, 00:12:55.893 "data_size": 63488 00:12:55.893 } 00:12:55.893 ] 00:12:55.893 }' 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.893 09:49:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:56.464 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:56.464 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:56.464 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:56.464 09:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.464 09:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.464 [2024-11-27 09:49:57.416888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:56.464 [2024-11-27 09:49:57.417052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.464 [2024-11-27 09:49:57.417120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:56.464 [2024-11-27 09:49:57.417174] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.464 [2024-11-27 09:49:57.417789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.464 [2024-11-27 09:49:57.417817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:56.464 [2024-11-27 09:49:57.417925] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:56.464 [2024-11-27 09:49:57.417953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:56.464 pt2 00:12:56.464 09:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.464 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:56.464 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:56.464 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:56.464 09:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.464 09:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.464 [2024-11-27 09:49:57.428814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:56.464 [2024-11-27 09:49:57.428914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.464 [2024-11-27 09:49:57.428939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:56.464 [2024-11-27 09:49:57.428947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.464 [2024-11-27 09:49:57.429457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.464 [2024-11-27 09:49:57.429475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:56.464 [2024-11-27 09:49:57.429550] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:56.464 [2024-11-27 09:49:57.429578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:56.464 pt3 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.465 [2024-11-27 09:49:57.440772] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:56.465 [2024-11-27 09:49:57.440829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.465 [2024-11-27 09:49:57.440851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:56.465 [2024-11-27 09:49:57.440860] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.465 [2024-11-27 09:49:57.441386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.465 [2024-11-27 09:49:57.441423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:56.465 [2024-11-27 09:49:57.441510] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:56.465 [2024-11-27 09:49:57.441535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:56.465 [2024-11-27 09:49:57.441697] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:56.465 [2024-11-27 09:49:57.441706] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:56.465 [2024-11-27 09:49:57.441988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:56.465 [2024-11-27 09:49:57.442174] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:56.465 [2024-11-27 09:49:57.442190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:56.465 [2024-11-27 09:49:57.442347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.465 pt4 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.465 "name": "raid_bdev1", 00:12:56.465 "uuid": "a22f2ae2-cd67-4cd7-848b-3adae0c4a2dc", 00:12:56.465 "strip_size_kb": 64, 00:12:56.465 "state": "online", 00:12:56.465 "raid_level": "concat", 00:12:56.465 
"superblock": true, 00:12:56.465 "num_base_bdevs": 4, 00:12:56.465 "num_base_bdevs_discovered": 4, 00:12:56.465 "num_base_bdevs_operational": 4, 00:12:56.465 "base_bdevs_list": [ 00:12:56.465 { 00:12:56.465 "name": "pt1", 00:12:56.465 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:56.465 "is_configured": true, 00:12:56.465 "data_offset": 2048, 00:12:56.465 "data_size": 63488 00:12:56.465 }, 00:12:56.465 { 00:12:56.465 "name": "pt2", 00:12:56.465 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:56.465 "is_configured": true, 00:12:56.465 "data_offset": 2048, 00:12:56.465 "data_size": 63488 00:12:56.465 }, 00:12:56.465 { 00:12:56.465 "name": "pt3", 00:12:56.465 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:56.465 "is_configured": true, 00:12:56.465 "data_offset": 2048, 00:12:56.465 "data_size": 63488 00:12:56.465 }, 00:12:56.465 { 00:12:56.465 "name": "pt4", 00:12:56.465 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:56.465 "is_configured": true, 00:12:56.465 "data_offset": 2048, 00:12:56.465 "data_size": 63488 00:12:56.465 } 00:12:56.465 ] 00:12:56.465 }' 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.465 09:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.035 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:57.035 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:57.036 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:57.036 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:57.036 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:57.036 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:57.036 09:49:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:57.036 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:57.036 09:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.036 09:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.036 [2024-11-27 09:49:57.896470] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.036 09:49:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.036 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:57.036 "name": "raid_bdev1", 00:12:57.036 "aliases": [ 00:12:57.036 "a22f2ae2-cd67-4cd7-848b-3adae0c4a2dc" 00:12:57.036 ], 00:12:57.036 "product_name": "Raid Volume", 00:12:57.036 "block_size": 512, 00:12:57.036 "num_blocks": 253952, 00:12:57.036 "uuid": "a22f2ae2-cd67-4cd7-848b-3adae0c4a2dc", 00:12:57.036 "assigned_rate_limits": { 00:12:57.036 "rw_ios_per_sec": 0, 00:12:57.036 "rw_mbytes_per_sec": 0, 00:12:57.036 "r_mbytes_per_sec": 0, 00:12:57.036 "w_mbytes_per_sec": 0 00:12:57.036 }, 00:12:57.036 "claimed": false, 00:12:57.036 "zoned": false, 00:12:57.036 "supported_io_types": { 00:12:57.036 "read": true, 00:12:57.036 "write": true, 00:12:57.036 "unmap": true, 00:12:57.036 "flush": true, 00:12:57.036 "reset": true, 00:12:57.036 "nvme_admin": false, 00:12:57.036 "nvme_io": false, 00:12:57.036 "nvme_io_md": false, 00:12:57.036 "write_zeroes": true, 00:12:57.036 "zcopy": false, 00:12:57.036 "get_zone_info": false, 00:12:57.036 "zone_management": false, 00:12:57.036 "zone_append": false, 00:12:57.036 "compare": false, 00:12:57.036 "compare_and_write": false, 00:12:57.036 "abort": false, 00:12:57.036 "seek_hole": false, 00:12:57.036 "seek_data": false, 00:12:57.036 "copy": false, 00:12:57.036 "nvme_iov_md": false 00:12:57.036 }, 00:12:57.036 
"memory_domains": [ 00:12:57.036 { 00:12:57.036 "dma_device_id": "system", 00:12:57.036 "dma_device_type": 1 00:12:57.036 }, 00:12:57.036 { 00:12:57.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.036 "dma_device_type": 2 00:12:57.036 }, 00:12:57.036 { 00:12:57.036 "dma_device_id": "system", 00:12:57.036 "dma_device_type": 1 00:12:57.036 }, 00:12:57.036 { 00:12:57.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.036 "dma_device_type": 2 00:12:57.036 }, 00:12:57.036 { 00:12:57.036 "dma_device_id": "system", 00:12:57.036 "dma_device_type": 1 00:12:57.036 }, 00:12:57.036 { 00:12:57.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.036 "dma_device_type": 2 00:12:57.036 }, 00:12:57.036 { 00:12:57.036 "dma_device_id": "system", 00:12:57.036 "dma_device_type": 1 00:12:57.036 }, 00:12:57.036 { 00:12:57.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.036 "dma_device_type": 2 00:12:57.036 } 00:12:57.036 ], 00:12:57.036 "driver_specific": { 00:12:57.036 "raid": { 00:12:57.036 "uuid": "a22f2ae2-cd67-4cd7-848b-3adae0c4a2dc", 00:12:57.036 "strip_size_kb": 64, 00:12:57.036 "state": "online", 00:12:57.036 "raid_level": "concat", 00:12:57.036 "superblock": true, 00:12:57.036 "num_base_bdevs": 4, 00:12:57.036 "num_base_bdevs_discovered": 4, 00:12:57.036 "num_base_bdevs_operational": 4, 00:12:57.036 "base_bdevs_list": [ 00:12:57.036 { 00:12:57.036 "name": "pt1", 00:12:57.036 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.036 "is_configured": true, 00:12:57.036 "data_offset": 2048, 00:12:57.036 "data_size": 63488 00:12:57.036 }, 00:12:57.036 { 00:12:57.036 "name": "pt2", 00:12:57.036 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.036 "is_configured": true, 00:12:57.036 "data_offset": 2048, 00:12:57.036 "data_size": 63488 00:12:57.036 }, 00:12:57.036 { 00:12:57.036 "name": "pt3", 00:12:57.036 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:57.036 "is_configured": true, 00:12:57.036 "data_offset": 2048, 00:12:57.036 "data_size": 63488 
00:12:57.036 }, 00:12:57.036 { 00:12:57.036 "name": "pt4", 00:12:57.036 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:57.036 "is_configured": true, 00:12:57.036 "data_offset": 2048, 00:12:57.036 "data_size": 63488 00:12:57.036 } 00:12:57.036 ] 00:12:57.036 } 00:12:57.036 } 00:12:57.036 }' 00:12:57.036 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.036 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:57.036 pt2 00:12:57.036 pt3 00:12:57.036 pt4' 00:12:57.036 09:49:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.036 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:57.296 [2024-11-27 09:49:58.227778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a22f2ae2-cd67-4cd7-848b-3adae0c4a2dc '!=' a22f2ae2-cd67-4cd7-848b-3adae0c4a2dc ']' 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72904 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72904 ']' 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72904 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72904 00:12:57.296 killing process with pid 72904 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72904' 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72904 00:12:57.296 [2024-11-27 09:49:58.296327] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:57.296 [2024-11-27 09:49:58.296426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:57.296 [2024-11-27 09:49:58.296507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:57.296 [2024-11-27 09:49:58.296516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:57.296 09:49:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72904 00:12:57.866 [2024-11-27 09:49:58.740823] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:59.248 09:49:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:59.248 00:12:59.248 real 0m5.720s 00:12:59.248 user 0m8.001s 00:12:59.248 sys 0m1.099s 00:12:59.248 09:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.248 09:49:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.248 ************************************ 00:12:59.248 END TEST raid_superblock_test 
00:12:59.248 ************************************ 00:12:59.248 09:50:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:12:59.248 09:50:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:59.248 09:50:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.248 09:50:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:59.248 ************************************ 00:12:59.248 START TEST raid_read_error_test 00:12:59.248 ************************************ 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:59.248 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:59.249 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:59.249 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.L9tgUcaebo 00:12:59.249 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73164 00:12:59.249 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:12:59.249 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73164 00:12:59.249 09:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73164 ']' 00:12:59.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.249 09:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.249 09:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.249 09:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.249 09:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.249 09:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.249 [2024-11-27 09:50:00.138779] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:12:59.249 [2024-11-27 09:50:00.138918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73164 ] 00:12:59.249 [2024-11-27 09:50:00.314565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.508 [2024-11-27 09:50:00.443816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.766 [2024-11-27 09:50:00.675411] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:59.766 [2024-11-27 09:50:00.675541] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.026 09:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:00.026 09:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:00.026 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:00.026 09:50:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:00.026 09:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.026 09:50:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.026 BaseBdev1_malloc 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.026 true 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.026 [2024-11-27 09:50:01.033084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:00.026 [2024-11-27 09:50:01.033150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.026 [2024-11-27 09:50:01.033191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:00.026 [2024-11-27 09:50:01.033203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.026 [2024-11-27 09:50:01.035736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.026 [2024-11-27 09:50:01.035816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:00.026 BaseBdev1 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.026 BaseBdev2_malloc 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.026 true 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.026 [2024-11-27 09:50:01.105524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:00.026 [2024-11-27 09:50:01.105582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.026 [2024-11-27 09:50:01.105615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:00.026 [2024-11-27 09:50:01.105626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.026 [2024-11-27 09:50:01.108001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.026 [2024-11-27 09:50:01.108049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:00.026 BaseBdev2 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.026 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.286 BaseBdev3_malloc 00:13:00.286 09:50:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.286 true 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.286 [2024-11-27 09:50:01.186023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:00.286 [2024-11-27 09:50:01.186076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.286 [2024-11-27 09:50:01.186109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:00.286 [2024-11-27 09:50:01.186120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.286 [2024-11-27 09:50:01.188465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.286 [2024-11-27 09:50:01.188560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:00.286 BaseBdev3 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.286 BaseBdev4_malloc 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.286 true 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.286 [2024-11-27 09:50:01.255116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:00.286 [2024-11-27 09:50:01.255168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.286 [2024-11-27 09:50:01.255203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:00.286 [2024-11-27 09:50:01.255214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.286 [2024-11-27 09:50:01.257584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.286 [2024-11-27 09:50:01.257626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:00.286 BaseBdev4 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.286 [2024-11-27 09:50:01.267196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:00.286 [2024-11-27 09:50:01.269272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:00.286 [2024-11-27 09:50:01.269418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:00.286 [2024-11-27 09:50:01.269489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:00.286 [2024-11-27 09:50:01.269722] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:00.286 [2024-11-27 09:50:01.269736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:00.286 [2024-11-27 09:50:01.269990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:00.286 [2024-11-27 09:50:01.270187] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:00.286 [2024-11-27 09:50:01.270199] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:00.286 [2024-11-27 09:50:01.270344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:00.286 09:50:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.286 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.287 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.287 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.287 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.287 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.287 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.287 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.287 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.287 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.287 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.287 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.287 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.287 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.287 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.287 "name": "raid_bdev1", 00:13:00.287 "uuid": "11f931c3-4833-4091-b989-c1f98188723f", 00:13:00.287 "strip_size_kb": 64, 00:13:00.287 "state": "online", 00:13:00.287 "raid_level": "concat", 00:13:00.287 "superblock": true, 00:13:00.287 "num_base_bdevs": 4, 00:13:00.287 "num_base_bdevs_discovered": 4, 00:13:00.287 "num_base_bdevs_operational": 4, 00:13:00.287 "base_bdevs_list": [ 
00:13:00.287 { 00:13:00.287 "name": "BaseBdev1", 00:13:00.287 "uuid": "a3fb0b9f-728e-5689-a415-6256dc8302c8", 00:13:00.287 "is_configured": true, 00:13:00.287 "data_offset": 2048, 00:13:00.287 "data_size": 63488 00:13:00.287 }, 00:13:00.287 { 00:13:00.287 "name": "BaseBdev2", 00:13:00.287 "uuid": "e3aac83a-8c4c-52a7-885a-7105ef8d1bcc", 00:13:00.287 "is_configured": true, 00:13:00.287 "data_offset": 2048, 00:13:00.287 "data_size": 63488 00:13:00.287 }, 00:13:00.287 { 00:13:00.287 "name": "BaseBdev3", 00:13:00.287 "uuid": "77cc22b9-66f0-555e-b986-1494be46777d", 00:13:00.287 "is_configured": true, 00:13:00.287 "data_offset": 2048, 00:13:00.287 "data_size": 63488 00:13:00.287 }, 00:13:00.287 { 00:13:00.287 "name": "BaseBdev4", 00:13:00.287 "uuid": "361b9916-73c1-59ee-96bf-c8e73a07e4e4", 00:13:00.287 "is_configured": true, 00:13:00.287 "data_offset": 2048, 00:13:00.287 "data_size": 63488 00:13:00.287 } 00:13:00.287 ] 00:13:00.287 }' 00:13:00.287 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.287 09:50:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.857 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:00.857 09:50:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:00.857 [2024-11-27 09:50:01.815657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.798 09:50:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.798 09:50:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.798 "name": "raid_bdev1", 00:13:01.798 "uuid": "11f931c3-4833-4091-b989-c1f98188723f", 00:13:01.798 "strip_size_kb": 64, 00:13:01.798 "state": "online", 00:13:01.798 "raid_level": "concat", 00:13:01.798 "superblock": true, 00:13:01.798 "num_base_bdevs": 4, 00:13:01.798 "num_base_bdevs_discovered": 4, 00:13:01.798 "num_base_bdevs_operational": 4, 00:13:01.798 "base_bdevs_list": [ 00:13:01.798 { 00:13:01.798 "name": "BaseBdev1", 00:13:01.798 "uuid": "a3fb0b9f-728e-5689-a415-6256dc8302c8", 00:13:01.798 "is_configured": true, 00:13:01.798 "data_offset": 2048, 00:13:01.798 "data_size": 63488 00:13:01.798 }, 00:13:01.798 { 00:13:01.798 "name": "BaseBdev2", 00:13:01.798 "uuid": "e3aac83a-8c4c-52a7-885a-7105ef8d1bcc", 00:13:01.798 "is_configured": true, 00:13:01.798 "data_offset": 2048, 00:13:01.798 "data_size": 63488 00:13:01.798 }, 00:13:01.798 { 00:13:01.798 "name": "BaseBdev3", 00:13:01.798 "uuid": "77cc22b9-66f0-555e-b986-1494be46777d", 00:13:01.798 "is_configured": true, 00:13:01.798 "data_offset": 2048, 00:13:01.798 "data_size": 63488 00:13:01.798 }, 00:13:01.798 { 00:13:01.798 "name": "BaseBdev4", 00:13:01.798 "uuid": "361b9916-73c1-59ee-96bf-c8e73a07e4e4", 00:13:01.798 "is_configured": true, 00:13:01.798 "data_offset": 2048, 00:13:01.798 "data_size": 63488 00:13:01.798 } 00:13:01.798 ] 00:13:01.798 }' 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.798 09:50:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.369 09:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:02.369 09:50:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.369 09:50:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.369 [2024-11-27 09:50:03.216675] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:02.369 [2024-11-27 09:50:03.216722] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:02.369 [2024-11-27 09:50:03.219484] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:02.369 [2024-11-27 09:50:03.219556] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.369 [2024-11-27 09:50:03.219604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:02.369 [2024-11-27 09:50:03.219621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:02.369 { 00:13:02.369 "results": [ 00:13:02.369 { 00:13:02.369 "job": "raid_bdev1", 00:13:02.369 "core_mask": "0x1", 00:13:02.369 "workload": "randrw", 00:13:02.369 "percentage": 50, 00:13:02.369 "status": "finished", 00:13:02.369 "queue_depth": 1, 00:13:02.369 "io_size": 131072, 00:13:02.369 "runtime": 1.401684, 00:13:02.369 "iops": 13491.628640977568, 00:13:02.369 "mibps": 1686.453580122196, 00:13:02.369 "io_failed": 1, 00:13:02.369 "io_timeout": 0, 00:13:02.369 "avg_latency_us": 104.0886337271592, 00:13:02.369 "min_latency_us": 25.9353711790393, 00:13:02.369 "max_latency_us": 1473.844541484716 00:13:02.369 } 00:13:02.369 ], 00:13:02.369 "core_count": 1 00:13:02.369 } 00:13:02.369 09:50:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.369 09:50:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73164 00:13:02.369 09:50:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73164 ']' 00:13:02.369 09:50:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73164 00:13:02.369 09:50:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:02.369 09:50:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.369 09:50:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73164 00:13:02.369 09:50:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:02.369 09:50:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:02.369 killing process with pid 73164 00:13:02.369 09:50:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73164' 00:13:02.369 09:50:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73164 00:13:02.369 [2024-11-27 09:50:03.262678] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:02.369 09:50:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73164 00:13:02.629 [2024-11-27 09:50:03.618423] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:04.012 09:50:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.L9tgUcaebo 00:13:04.012 09:50:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:04.012 09:50:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:04.012 09:50:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:04.012 09:50:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:04.012 09:50:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:04.012 09:50:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:04.012 09:50:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:04.012 ************************************ 00:13:04.012 00:13:04.012 real 0m4.889s 00:13:04.012 user 0m5.640s 00:13:04.012 sys 0m0.699s 00:13:04.012 09:50:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.012 09:50:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.012 END TEST raid_read_error_test 00:13:04.012 ************************************ 00:13:04.012 09:50:04 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:13:04.012 09:50:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:04.012 09:50:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:04.012 09:50:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:04.012 ************************************ 00:13:04.012 START TEST raid_write_error_test 00:13:04.012 ************************************ 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:04.012 09:50:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:04.012 09:50:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:04.012 09:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # 
bdevperf_log=/raidtest/tmp.tTP6mdVPTV 00:13:04.012 09:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73310 00:13:04.012 09:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:04.012 09:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73310 00:13:04.012 09:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73310 ']' 00:13:04.012 09:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.012 09:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:04.012 09:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.012 09:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:04.012 09:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.012 [2024-11-27 09:50:05.090235] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:13:04.012 [2024-11-27 09:50:05.090421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73310 ] 00:13:04.272 [2024-11-27 09:50:05.264276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.531 [2024-11-27 09:50:05.405929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.531 [2024-11-27 09:50:05.641471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.531 [2024-11-27 09:50:05.641547] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.790 09:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.790 09:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:04.790 09:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.790 09:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:04.790 09:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.790 09:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.051 BaseBdev1_malloc 00:13:05.051 09:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.051 09:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:05.051 09:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.051 09:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.051 true 00:13:05.051 09:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:05.051 09:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:05.051 09:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.051 09:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.051 [2024-11-27 09:50:05.977965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:05.051 [2024-11-27 09:50:05.978058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.051 [2024-11-27 09:50:05.978081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:05.051 [2024-11-27 09:50:05.978092] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.051 [2024-11-27 09:50:05.980447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.051 [2024-11-27 09:50:05.980558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:05.051 BaseBdev1 00:13:05.051 09:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.051 09:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:05.051 09:50:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:05.051 09:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.051 09:50:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.051 BaseBdev2_malloc 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:05.051 09:50:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.051 true 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.051 [2024-11-27 09:50:06.050401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:05.051 [2024-11-27 09:50:06.050459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.051 [2024-11-27 09:50:06.050475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:05.051 [2024-11-27 09:50:06.050487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.051 [2024-11-27 09:50:06.052874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.051 [2024-11-27 09:50:06.052916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:05.051 BaseBdev2 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:05.051 BaseBdev3_malloc 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.051 true 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.051 [2024-11-27 09:50:06.137069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:05.051 [2024-11-27 09:50:06.137125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.051 [2024-11-27 09:50:06.137142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:05.051 [2024-11-27 09:50:06.137154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.051 [2024-11-27 09:50:06.139531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.051 [2024-11-27 09:50:06.139637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:05.051 BaseBdev3 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.051 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.312 BaseBdev4_malloc 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.312 true 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.312 [2024-11-27 09:50:06.209835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:05.312 [2024-11-27 09:50:06.209893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.312 [2024-11-27 09:50:06.209910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:05.312 [2024-11-27 09:50:06.209922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.312 [2024-11-27 09:50:06.212334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.312 [2024-11-27 09:50:06.212414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:05.312 BaseBdev4 
00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.312 [2024-11-27 09:50:06.221884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:05.312 [2024-11-27 09:50:06.223955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:05.312 [2024-11-27 09:50:06.224099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:05.312 [2024-11-27 09:50:06.224167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:05.312 [2024-11-27 09:50:06.224398] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:05.312 [2024-11-27 09:50:06.224413] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:05.312 [2024-11-27 09:50:06.224662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:05.312 [2024-11-27 09:50:06.224832] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:05.312 [2024-11-27 09:50:06.224844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:05.312 [2024-11-27 09:50:06.225010] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.312 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.312 "name": "raid_bdev1", 00:13:05.312 "uuid": "841557d1-2da7-4009-8b4e-d2dc6d37d3c8", 00:13:05.312 "strip_size_kb": 64, 00:13:05.312 "state": "online", 00:13:05.312 "raid_level": "concat", 00:13:05.312 "superblock": true, 00:13:05.312 "num_base_bdevs": 4, 00:13:05.312 "num_base_bdevs_discovered": 4, 00:13:05.312 
"num_base_bdevs_operational": 4, 00:13:05.312 "base_bdevs_list": [ 00:13:05.312 { 00:13:05.312 "name": "BaseBdev1", 00:13:05.312 "uuid": "31dbafac-53bd-58eb-a3ad-e1015bfce49f", 00:13:05.312 "is_configured": true, 00:13:05.312 "data_offset": 2048, 00:13:05.312 "data_size": 63488 00:13:05.312 }, 00:13:05.312 { 00:13:05.312 "name": "BaseBdev2", 00:13:05.312 "uuid": "b04e7bfc-4cdf-5594-98cb-b619dd21fd49", 00:13:05.312 "is_configured": true, 00:13:05.312 "data_offset": 2048, 00:13:05.312 "data_size": 63488 00:13:05.312 }, 00:13:05.313 { 00:13:05.313 "name": "BaseBdev3", 00:13:05.313 "uuid": "7781b98c-bebb-50e8-9c36-b5bb399713f0", 00:13:05.313 "is_configured": true, 00:13:05.313 "data_offset": 2048, 00:13:05.313 "data_size": 63488 00:13:05.313 }, 00:13:05.313 { 00:13:05.313 "name": "BaseBdev4", 00:13:05.313 "uuid": "f80ef6e7-25a9-5a17-8de5-d94932875ec8", 00:13:05.313 "is_configured": true, 00:13:05.313 "data_offset": 2048, 00:13:05.313 "data_size": 63488 00:13:05.313 } 00:13:05.313 ] 00:13:05.313 }' 00:13:05.313 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.313 09:50:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.891 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:05.891 09:50:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:05.891 [2024-11-27 09:50:06.806421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.846 09:50:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.846 "name": "raid_bdev1", 00:13:06.846 "uuid": "841557d1-2da7-4009-8b4e-d2dc6d37d3c8", 00:13:06.846 "strip_size_kb": 64, 00:13:06.846 "state": "online", 00:13:06.846 "raid_level": "concat", 00:13:06.846 "superblock": true, 00:13:06.846 "num_base_bdevs": 4, 00:13:06.846 "num_base_bdevs_discovered": 4, 00:13:06.846 "num_base_bdevs_operational": 4, 00:13:06.846 "base_bdevs_list": [ 00:13:06.846 { 00:13:06.846 "name": "BaseBdev1", 00:13:06.846 "uuid": "31dbafac-53bd-58eb-a3ad-e1015bfce49f", 00:13:06.846 "is_configured": true, 00:13:06.846 "data_offset": 2048, 00:13:06.846 "data_size": 63488 00:13:06.846 }, 00:13:06.846 { 00:13:06.846 "name": "BaseBdev2", 00:13:06.846 "uuid": "b04e7bfc-4cdf-5594-98cb-b619dd21fd49", 00:13:06.846 "is_configured": true, 00:13:06.846 "data_offset": 2048, 00:13:06.846 "data_size": 63488 00:13:06.846 }, 00:13:06.846 { 00:13:06.846 "name": "BaseBdev3", 00:13:06.846 "uuid": "7781b98c-bebb-50e8-9c36-b5bb399713f0", 00:13:06.846 "is_configured": true, 00:13:06.846 "data_offset": 2048, 00:13:06.846 "data_size": 63488 00:13:06.846 }, 00:13:06.846 { 00:13:06.846 "name": "BaseBdev4", 00:13:06.846 "uuid": "f80ef6e7-25a9-5a17-8de5-d94932875ec8", 00:13:06.846 "is_configured": true, 00:13:06.846 "data_offset": 2048, 00:13:06.846 "data_size": 63488 00:13:06.846 } 00:13:06.846 ] 00:13:06.846 }' 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.846 09:50:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.106 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:07.106 09:50:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.106 09:50:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:07.106 [2024-11-27 09:50:08.215218] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:07.106 [2024-11-27 09:50:08.215256] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:07.106 [2024-11-27 09:50:08.218246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.106 [2024-11-27 09:50:08.218314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:07.106 [2024-11-27 09:50:08.218360] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:07.106 [2024-11-27 09:50:08.218377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:07.106 { 00:13:07.106 "results": [ 00:13:07.106 { 00:13:07.106 "job": "raid_bdev1", 00:13:07.106 "core_mask": "0x1", 00:13:07.106 "workload": "randrw", 00:13:07.106 "percentage": 50, 00:13:07.106 "status": "finished", 00:13:07.106 "queue_depth": 1, 00:13:07.106 "io_size": 131072, 00:13:07.106 "runtime": 1.409443, 00:13:07.106 "iops": 13548.614594559695, 00:13:07.106 "mibps": 1693.5768243199618, 00:13:07.106 "io_failed": 1, 00:13:07.106 "io_timeout": 0, 00:13:07.106 "avg_latency_us": 103.59355869471715, 00:13:07.106 "min_latency_us": 25.6, 00:13:07.106 "max_latency_us": 1430.9170305676855 00:13:07.106 } 00:13:07.106 ], 00:13:07.106 "core_count": 1 00:13:07.107 } 00:13:07.107 09:50:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.107 09:50:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73310 00:13:07.107 09:50:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73310 ']' 00:13:07.107 09:50:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73310 00:13:07.107 09:50:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:13:07.107 09:50:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.107 09:50:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73310 00:13:07.367 09:50:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.367 09:50:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.367 killing process with pid 73310 00:13:07.367 09:50:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73310' 00:13:07.367 09:50:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73310 00:13:07.367 [2024-11-27 09:50:08.259943] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:07.367 09:50:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73310 00:13:07.628 [2024-11-27 09:50:08.607691] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.009 09:50:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tTP6mdVPTV 00:13:09.009 09:50:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:09.009 09:50:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:09.009 09:50:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:09.009 09:50:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:09.009 09:50:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:09.009 09:50:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:09.009 09:50:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:09.009 ************************************ 00:13:09.009 END TEST 
raid_write_error_test 00:13:09.009 ************************************ 00:13:09.009 00:13:09.009 real 0m4.902s 00:13:09.009 user 0m5.705s 00:13:09.009 sys 0m0.692s 00:13:09.009 09:50:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.009 09:50:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.009 09:50:09 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:09.009 09:50:09 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:13:09.009 09:50:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:09.009 09:50:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.009 09:50:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:09.009 ************************************ 00:13:09.009 START TEST raid_state_function_test 00:13:09.009 ************************************ 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:09.009 09:50:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:09.009 09:50:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73459 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73459' 00:13:09.009 Process raid pid: 73459 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73459 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73459 ']' 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.009 09:50:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.009 [2024-11-27 09:50:10.058416] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:13:09.009 [2024-11-27 09:50:10.058680] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.270 [2024-11-27 09:50:10.239108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.270 [2024-11-27 09:50:10.372534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.529 [2024-11-27 09:50:10.607658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.530 [2024-11-27 09:50:10.607799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.790 [2024-11-27 09:50:10.874278] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:09.790 [2024-11-27 09:50:10.874379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:09.790 [2024-11-27 09:50:10.874417] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:09.790 [2024-11-27 09:50:10.874442] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:09.790 [2024-11-27 09:50:10.874460] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:09.790 [2024-11-27 09:50:10.874481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:09.790 [2024-11-27 09:50:10.874499] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:09.790 [2024-11-27 09:50:10.874520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.790 09:50:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.050 09:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.050 "name": "Existed_Raid", 00:13:10.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.050 "strip_size_kb": 0, 00:13:10.050 "state": "configuring", 00:13:10.050 "raid_level": "raid1", 00:13:10.050 "superblock": false, 00:13:10.050 "num_base_bdevs": 4, 00:13:10.050 "num_base_bdevs_discovered": 0, 00:13:10.050 "num_base_bdevs_operational": 4, 00:13:10.050 "base_bdevs_list": [ 00:13:10.050 { 00:13:10.050 "name": "BaseBdev1", 00:13:10.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.050 "is_configured": false, 00:13:10.050 "data_offset": 0, 00:13:10.050 "data_size": 0 00:13:10.050 }, 00:13:10.050 { 00:13:10.050 "name": "BaseBdev2", 00:13:10.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.050 "is_configured": false, 00:13:10.050 "data_offset": 0, 00:13:10.050 "data_size": 0 00:13:10.050 }, 00:13:10.050 { 00:13:10.050 "name": "BaseBdev3", 00:13:10.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.050 "is_configured": false, 00:13:10.050 "data_offset": 0, 00:13:10.050 "data_size": 0 00:13:10.050 }, 00:13:10.050 { 00:13:10.050 "name": "BaseBdev4", 00:13:10.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.050 "is_configured": false, 00:13:10.050 "data_offset": 0, 00:13:10.050 "data_size": 0 00:13:10.050 } 00:13:10.050 ] 00:13:10.050 }' 00:13:10.050 09:50:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.050 09:50:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.310 [2024-11-27 09:50:11.321496] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:10.310 [2024-11-27 09:50:11.321540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.310 [2024-11-27 09:50:11.333454] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:10.310 [2024-11-27 09:50:11.333497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:10.310 [2024-11-27 09:50:11.333507] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:10.310 [2024-11-27 09:50:11.333517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:10.310 [2024-11-27 09:50:11.333523] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:10.310 [2024-11-27 09:50:11.333532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:10.310 [2024-11-27 09:50:11.333537] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:10.310 [2024-11-27 09:50:11.333546] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.310 [2024-11-27 09:50:11.386322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.310 BaseBdev1 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.310 [ 00:13:10.310 { 00:13:10.310 "name": "BaseBdev1", 00:13:10.310 "aliases": [ 00:13:10.310 "2a914304-4f71-43fb-bdd3-9d7e57d863a4" 00:13:10.310 ], 00:13:10.310 "product_name": "Malloc disk", 00:13:10.310 "block_size": 512, 00:13:10.310 "num_blocks": 65536, 00:13:10.310 "uuid": "2a914304-4f71-43fb-bdd3-9d7e57d863a4", 00:13:10.310 "assigned_rate_limits": { 00:13:10.310 "rw_ios_per_sec": 0, 00:13:10.310 "rw_mbytes_per_sec": 0, 00:13:10.310 "r_mbytes_per_sec": 0, 00:13:10.310 "w_mbytes_per_sec": 0 00:13:10.310 }, 00:13:10.310 "claimed": true, 00:13:10.310 "claim_type": "exclusive_write", 00:13:10.310 "zoned": false, 00:13:10.310 "supported_io_types": { 00:13:10.310 "read": true, 00:13:10.310 "write": true, 00:13:10.310 "unmap": true, 00:13:10.310 "flush": true, 00:13:10.310 "reset": true, 00:13:10.310 "nvme_admin": false, 00:13:10.310 "nvme_io": false, 00:13:10.310 "nvme_io_md": false, 00:13:10.310 "write_zeroes": true, 00:13:10.310 "zcopy": true, 00:13:10.310 "get_zone_info": false, 00:13:10.310 "zone_management": false, 00:13:10.310 "zone_append": false, 00:13:10.310 "compare": false, 00:13:10.310 "compare_and_write": false, 00:13:10.310 "abort": true, 00:13:10.310 "seek_hole": false, 00:13:10.310 "seek_data": false, 00:13:10.310 "copy": true, 00:13:10.310 "nvme_iov_md": false 00:13:10.310 }, 00:13:10.310 "memory_domains": [ 00:13:10.310 { 00:13:10.310 "dma_device_id": "system", 00:13:10.310 "dma_device_type": 1 00:13:10.310 }, 00:13:10.310 { 00:13:10.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.310 "dma_device_type": 2 00:13:10.310 } 00:13:10.310 ], 00:13:10.310 "driver_specific": {} 00:13:10.310 } 00:13:10.310 ] 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.310 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.311 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.311 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.311 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.311 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.311 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.311 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.571 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.571 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.571 "name": "Existed_Raid", 
00:13:10.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.571 "strip_size_kb": 0, 00:13:10.571 "state": "configuring", 00:13:10.571 "raid_level": "raid1", 00:13:10.571 "superblock": false, 00:13:10.571 "num_base_bdevs": 4, 00:13:10.571 "num_base_bdevs_discovered": 1, 00:13:10.571 "num_base_bdevs_operational": 4, 00:13:10.571 "base_bdevs_list": [ 00:13:10.571 { 00:13:10.571 "name": "BaseBdev1", 00:13:10.571 "uuid": "2a914304-4f71-43fb-bdd3-9d7e57d863a4", 00:13:10.571 "is_configured": true, 00:13:10.571 "data_offset": 0, 00:13:10.571 "data_size": 65536 00:13:10.571 }, 00:13:10.571 { 00:13:10.571 "name": "BaseBdev2", 00:13:10.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.571 "is_configured": false, 00:13:10.571 "data_offset": 0, 00:13:10.571 "data_size": 0 00:13:10.571 }, 00:13:10.571 { 00:13:10.571 "name": "BaseBdev3", 00:13:10.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.571 "is_configured": false, 00:13:10.571 "data_offset": 0, 00:13:10.571 "data_size": 0 00:13:10.571 }, 00:13:10.571 { 00:13:10.571 "name": "BaseBdev4", 00:13:10.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.571 "is_configured": false, 00:13:10.571 "data_offset": 0, 00:13:10.571 "data_size": 0 00:13:10.571 } 00:13:10.571 ] 00:13:10.571 }' 00:13:10.571 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.571 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.831 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:10.831 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.831 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.831 [2024-11-27 09:50:11.845588] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:10.831 [2024-11-27 09:50:11.845654] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:10.831 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.831 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:10.831 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.831 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.831 [2024-11-27 09:50:11.857625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.831 [2024-11-27 09:50:11.859837] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:10.832 [2024-11-27 09:50:11.859880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:10.832 [2024-11-27 09:50:11.859890] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:10.832 [2024-11-27 09:50:11.859917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:10.832 [2024-11-27 09:50:11.859924] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:10.832 [2024-11-27 09:50:11.859932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:10.832 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.832 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:10.832 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:10.832 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:10.832 
09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:10.832 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:10.832 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.832 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.832 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.832 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.832 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.832 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.832 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.832 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.832 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.832 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.832 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.832 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.832 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.832 "name": "Existed_Raid", 00:13:10.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.832 "strip_size_kb": 0, 00:13:10.832 "state": "configuring", 00:13:10.832 "raid_level": "raid1", 00:13:10.832 "superblock": false, 00:13:10.832 "num_base_bdevs": 4, 00:13:10.832 "num_base_bdevs_discovered": 1, 
00:13:10.832 "num_base_bdevs_operational": 4, 00:13:10.832 "base_bdevs_list": [ 00:13:10.832 { 00:13:10.832 "name": "BaseBdev1", 00:13:10.832 "uuid": "2a914304-4f71-43fb-bdd3-9d7e57d863a4", 00:13:10.832 "is_configured": true, 00:13:10.832 "data_offset": 0, 00:13:10.832 "data_size": 65536 00:13:10.832 }, 00:13:10.832 { 00:13:10.832 "name": "BaseBdev2", 00:13:10.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.832 "is_configured": false, 00:13:10.832 "data_offset": 0, 00:13:10.832 "data_size": 0 00:13:10.832 }, 00:13:10.832 { 00:13:10.832 "name": "BaseBdev3", 00:13:10.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.832 "is_configured": false, 00:13:10.832 "data_offset": 0, 00:13:10.832 "data_size": 0 00:13:10.832 }, 00:13:10.832 { 00:13:10.832 "name": "BaseBdev4", 00:13:10.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.832 "is_configured": false, 00:13:10.832 "data_offset": 0, 00:13:10.832 "data_size": 0 00:13:10.832 } 00:13:10.832 ] 00:13:10.832 }' 00:13:10.832 09:50:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.832 09:50:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.402 [2024-11-27 09:50:12.349008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:11.402 BaseBdev2 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.402 [ 00:13:11.402 { 00:13:11.402 "name": "BaseBdev2", 00:13:11.402 "aliases": [ 00:13:11.402 "ffff192a-1b07-49b0-bb8d-0eb09b51a5ce" 00:13:11.402 ], 00:13:11.402 "product_name": "Malloc disk", 00:13:11.402 "block_size": 512, 00:13:11.402 "num_blocks": 65536, 00:13:11.402 "uuid": "ffff192a-1b07-49b0-bb8d-0eb09b51a5ce", 00:13:11.402 "assigned_rate_limits": { 00:13:11.402 "rw_ios_per_sec": 0, 00:13:11.402 "rw_mbytes_per_sec": 0, 00:13:11.402 "r_mbytes_per_sec": 0, 00:13:11.402 "w_mbytes_per_sec": 0 00:13:11.402 }, 00:13:11.402 "claimed": true, 00:13:11.402 "claim_type": "exclusive_write", 00:13:11.402 "zoned": false, 00:13:11.402 "supported_io_types": { 00:13:11.402 "read": true, 
00:13:11.402 "write": true, 00:13:11.402 "unmap": true, 00:13:11.402 "flush": true, 00:13:11.402 "reset": true, 00:13:11.402 "nvme_admin": false, 00:13:11.402 "nvme_io": false, 00:13:11.402 "nvme_io_md": false, 00:13:11.402 "write_zeroes": true, 00:13:11.402 "zcopy": true, 00:13:11.402 "get_zone_info": false, 00:13:11.402 "zone_management": false, 00:13:11.402 "zone_append": false, 00:13:11.402 "compare": false, 00:13:11.402 "compare_and_write": false, 00:13:11.402 "abort": true, 00:13:11.402 "seek_hole": false, 00:13:11.402 "seek_data": false, 00:13:11.402 "copy": true, 00:13:11.402 "nvme_iov_md": false 00:13:11.402 }, 00:13:11.402 "memory_domains": [ 00:13:11.402 { 00:13:11.402 "dma_device_id": "system", 00:13:11.402 "dma_device_type": 1 00:13:11.402 }, 00:13:11.402 { 00:13:11.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.402 "dma_device_type": 2 00:13:11.402 } 00:13:11.402 ], 00:13:11.402 "driver_specific": {} 00:13:11.402 } 00:13:11.402 ] 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.402 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.402 "name": "Existed_Raid", 00:13:11.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.402 "strip_size_kb": 0, 00:13:11.402 "state": "configuring", 00:13:11.402 "raid_level": "raid1", 00:13:11.402 "superblock": false, 00:13:11.403 "num_base_bdevs": 4, 00:13:11.403 "num_base_bdevs_discovered": 2, 00:13:11.403 "num_base_bdevs_operational": 4, 00:13:11.403 "base_bdevs_list": [ 00:13:11.403 { 00:13:11.403 "name": "BaseBdev1", 00:13:11.403 "uuid": "2a914304-4f71-43fb-bdd3-9d7e57d863a4", 00:13:11.403 "is_configured": true, 00:13:11.403 "data_offset": 0, 00:13:11.403 "data_size": 65536 00:13:11.403 }, 00:13:11.403 { 00:13:11.403 "name": "BaseBdev2", 00:13:11.403 "uuid": "ffff192a-1b07-49b0-bb8d-0eb09b51a5ce", 00:13:11.403 "is_configured": true, 
00:13:11.403 "data_offset": 0, 00:13:11.403 "data_size": 65536 00:13:11.403 }, 00:13:11.403 { 00:13:11.403 "name": "BaseBdev3", 00:13:11.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.403 "is_configured": false, 00:13:11.403 "data_offset": 0, 00:13:11.403 "data_size": 0 00:13:11.403 }, 00:13:11.403 { 00:13:11.403 "name": "BaseBdev4", 00:13:11.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.403 "is_configured": false, 00:13:11.403 "data_offset": 0, 00:13:11.403 "data_size": 0 00:13:11.403 } 00:13:11.403 ] 00:13:11.403 }' 00:13:11.403 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.403 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.974 BaseBdev3 00:13:11.974 [2024-11-27 09:50:12.889206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.974 [ 00:13:11.974 { 00:13:11.974 "name": "BaseBdev3", 00:13:11.974 "aliases": [ 00:13:11.974 "1b0eae87-e038-41d4-b722-3bf9dbbd6dd1" 00:13:11.974 ], 00:13:11.974 "product_name": "Malloc disk", 00:13:11.974 "block_size": 512, 00:13:11.974 "num_blocks": 65536, 00:13:11.974 "uuid": "1b0eae87-e038-41d4-b722-3bf9dbbd6dd1", 00:13:11.974 "assigned_rate_limits": { 00:13:11.974 "rw_ios_per_sec": 0, 00:13:11.974 "rw_mbytes_per_sec": 0, 00:13:11.974 "r_mbytes_per_sec": 0, 00:13:11.974 "w_mbytes_per_sec": 0 00:13:11.974 }, 00:13:11.974 "claimed": true, 00:13:11.974 "claim_type": "exclusive_write", 00:13:11.974 "zoned": false, 00:13:11.974 "supported_io_types": { 00:13:11.974 "read": true, 00:13:11.974 "write": true, 00:13:11.974 "unmap": true, 00:13:11.974 "flush": true, 00:13:11.974 "reset": true, 00:13:11.974 "nvme_admin": false, 00:13:11.974 "nvme_io": false, 00:13:11.974 "nvme_io_md": false, 00:13:11.974 "write_zeroes": true, 00:13:11.974 "zcopy": true, 00:13:11.974 "get_zone_info": false, 00:13:11.974 "zone_management": false, 00:13:11.974 "zone_append": false, 00:13:11.974 "compare": false, 00:13:11.974 "compare_and_write": false, 
00:13:11.974 "abort": true, 00:13:11.974 "seek_hole": false, 00:13:11.974 "seek_data": false, 00:13:11.974 "copy": true, 00:13:11.974 "nvme_iov_md": false 00:13:11.974 }, 00:13:11.974 "memory_domains": [ 00:13:11.974 { 00:13:11.974 "dma_device_id": "system", 00:13:11.974 "dma_device_type": 1 00:13:11.974 }, 00:13:11.974 { 00:13:11.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.974 "dma_device_type": 2 00:13:11.974 } 00:13:11.974 ], 00:13:11.974 "driver_specific": {} 00:13:11.974 } 00:13:11.974 ] 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.974 "name": "Existed_Raid", 00:13:11.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.974 "strip_size_kb": 0, 00:13:11.974 "state": "configuring", 00:13:11.974 "raid_level": "raid1", 00:13:11.974 "superblock": false, 00:13:11.974 "num_base_bdevs": 4, 00:13:11.974 "num_base_bdevs_discovered": 3, 00:13:11.974 "num_base_bdevs_operational": 4, 00:13:11.974 "base_bdevs_list": [ 00:13:11.974 { 00:13:11.974 "name": "BaseBdev1", 00:13:11.974 "uuid": "2a914304-4f71-43fb-bdd3-9d7e57d863a4", 00:13:11.974 "is_configured": true, 00:13:11.974 "data_offset": 0, 00:13:11.974 "data_size": 65536 00:13:11.974 }, 00:13:11.974 { 00:13:11.974 "name": "BaseBdev2", 00:13:11.974 "uuid": "ffff192a-1b07-49b0-bb8d-0eb09b51a5ce", 00:13:11.974 "is_configured": true, 00:13:11.974 "data_offset": 0, 00:13:11.974 "data_size": 65536 00:13:11.974 }, 00:13:11.974 { 00:13:11.974 "name": "BaseBdev3", 00:13:11.974 "uuid": "1b0eae87-e038-41d4-b722-3bf9dbbd6dd1", 00:13:11.974 "is_configured": true, 00:13:11.974 "data_offset": 0, 00:13:11.974 "data_size": 65536 00:13:11.974 }, 00:13:11.974 { 00:13:11.974 "name": "BaseBdev4", 00:13:11.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.974 "is_configured": false, 
00:13:11.974 "data_offset": 0, 00:13:11.974 "data_size": 0 00:13:11.974 } 00:13:11.974 ] 00:13:11.974 }' 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.974 09:50:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.545 [2024-11-27 09:50:13.444561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:12.545 [2024-11-27 09:50:13.444708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:12.545 [2024-11-27 09:50:13.444722] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:12.545 [2024-11-27 09:50:13.445097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:12.545 [2024-11-27 09:50:13.445304] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:12.545 [2024-11-27 09:50:13.445321] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:12.545 [2024-11-27 09:50:13.445640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.545 BaseBdev4 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.545 [ 00:13:12.545 { 00:13:12.545 "name": "BaseBdev4", 00:13:12.545 "aliases": [ 00:13:12.545 "be1670c0-430c-44fa-8409-139f17a43003" 00:13:12.545 ], 00:13:12.545 "product_name": "Malloc disk", 00:13:12.545 "block_size": 512, 00:13:12.545 "num_blocks": 65536, 00:13:12.545 "uuid": "be1670c0-430c-44fa-8409-139f17a43003", 00:13:12.545 "assigned_rate_limits": { 00:13:12.545 "rw_ios_per_sec": 0, 00:13:12.545 "rw_mbytes_per_sec": 0, 00:13:12.545 "r_mbytes_per_sec": 0, 00:13:12.545 "w_mbytes_per_sec": 0 00:13:12.545 }, 00:13:12.545 "claimed": true, 00:13:12.545 "claim_type": "exclusive_write", 00:13:12.545 "zoned": false, 00:13:12.545 "supported_io_types": { 00:13:12.545 "read": true, 00:13:12.545 "write": true, 00:13:12.545 "unmap": true, 00:13:12.545 "flush": true, 00:13:12.545 "reset": true, 00:13:12.545 
"nvme_admin": false, 00:13:12.545 "nvme_io": false, 00:13:12.545 "nvme_io_md": false, 00:13:12.545 "write_zeroes": true, 00:13:12.545 "zcopy": true, 00:13:12.545 "get_zone_info": false, 00:13:12.545 "zone_management": false, 00:13:12.545 "zone_append": false, 00:13:12.545 "compare": false, 00:13:12.545 "compare_and_write": false, 00:13:12.545 "abort": true, 00:13:12.545 "seek_hole": false, 00:13:12.545 "seek_data": false, 00:13:12.545 "copy": true, 00:13:12.545 "nvme_iov_md": false 00:13:12.545 }, 00:13:12.545 "memory_domains": [ 00:13:12.545 { 00:13:12.545 "dma_device_id": "system", 00:13:12.545 "dma_device_type": 1 00:13:12.545 }, 00:13:12.545 { 00:13:12.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.545 "dma_device_type": 2 00:13:12.545 } 00:13:12.545 ], 00:13:12.545 "driver_specific": {} 00:13:12.545 } 00:13:12.545 ] 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:12.545 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.546 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:12.546 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:12.546 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:12.546 09:50:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.546 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.546 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.546 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.546 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.546 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.546 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.546 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.546 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.546 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.546 "name": "Existed_Raid", 00:13:12.546 "uuid": "e2d77381-0ed9-41ee-9cb6-02f858fe5204", 00:13:12.546 "strip_size_kb": 0, 00:13:12.546 "state": "online", 00:13:12.546 "raid_level": "raid1", 00:13:12.546 "superblock": false, 00:13:12.546 "num_base_bdevs": 4, 00:13:12.546 "num_base_bdevs_discovered": 4, 00:13:12.546 "num_base_bdevs_operational": 4, 00:13:12.546 "base_bdevs_list": [ 00:13:12.546 { 00:13:12.546 "name": "BaseBdev1", 00:13:12.546 "uuid": "2a914304-4f71-43fb-bdd3-9d7e57d863a4", 00:13:12.546 "is_configured": true, 00:13:12.546 "data_offset": 0, 00:13:12.546 "data_size": 65536 00:13:12.546 }, 00:13:12.546 { 00:13:12.546 "name": "BaseBdev2", 00:13:12.546 "uuid": "ffff192a-1b07-49b0-bb8d-0eb09b51a5ce", 00:13:12.546 "is_configured": true, 00:13:12.546 "data_offset": 0, 00:13:12.546 "data_size": 65536 00:13:12.546 }, 00:13:12.546 { 00:13:12.546 "name": "BaseBdev3", 00:13:12.546 "uuid": 
"1b0eae87-e038-41d4-b722-3bf9dbbd6dd1", 00:13:12.546 "is_configured": true, 00:13:12.546 "data_offset": 0, 00:13:12.546 "data_size": 65536 00:13:12.546 }, 00:13:12.546 { 00:13:12.546 "name": "BaseBdev4", 00:13:12.546 "uuid": "be1670c0-430c-44fa-8409-139f17a43003", 00:13:12.546 "is_configured": true, 00:13:12.546 "data_offset": 0, 00:13:12.546 "data_size": 65536 00:13:12.546 } 00:13:12.546 ] 00:13:12.546 }' 00:13:12.546 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.546 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.806 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:12.806 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:12.806 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:12.806 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:12.806 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:12.806 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:12.806 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:12.806 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:12.806 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.806 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.806 [2024-11-27 09:50:13.920198] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:13.067 09:50:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.067 09:50:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:13.067 "name": "Existed_Raid", 00:13:13.067 "aliases": [ 00:13:13.067 "e2d77381-0ed9-41ee-9cb6-02f858fe5204" 00:13:13.067 ], 00:13:13.067 "product_name": "Raid Volume", 00:13:13.067 "block_size": 512, 00:13:13.067 "num_blocks": 65536, 00:13:13.067 "uuid": "e2d77381-0ed9-41ee-9cb6-02f858fe5204", 00:13:13.067 "assigned_rate_limits": { 00:13:13.067 "rw_ios_per_sec": 0, 00:13:13.067 "rw_mbytes_per_sec": 0, 00:13:13.067 "r_mbytes_per_sec": 0, 00:13:13.067 "w_mbytes_per_sec": 0 00:13:13.067 }, 00:13:13.067 "claimed": false, 00:13:13.067 "zoned": false, 00:13:13.067 "supported_io_types": { 00:13:13.067 "read": true, 00:13:13.067 "write": true, 00:13:13.067 "unmap": false, 00:13:13.067 "flush": false, 00:13:13.067 "reset": true, 00:13:13.067 "nvme_admin": false, 00:13:13.067 "nvme_io": false, 00:13:13.067 "nvme_io_md": false, 00:13:13.067 "write_zeroes": true, 00:13:13.067 "zcopy": false, 00:13:13.067 "get_zone_info": false, 00:13:13.067 "zone_management": false, 00:13:13.067 "zone_append": false, 00:13:13.067 "compare": false, 00:13:13.067 "compare_and_write": false, 00:13:13.067 "abort": false, 00:13:13.067 "seek_hole": false, 00:13:13.067 "seek_data": false, 00:13:13.067 "copy": false, 00:13:13.067 "nvme_iov_md": false 00:13:13.067 }, 00:13:13.067 "memory_domains": [ 00:13:13.067 { 00:13:13.067 "dma_device_id": "system", 00:13:13.067 "dma_device_type": 1 00:13:13.067 }, 00:13:13.067 { 00:13:13.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.067 "dma_device_type": 2 00:13:13.067 }, 00:13:13.067 { 00:13:13.067 "dma_device_id": "system", 00:13:13.067 "dma_device_type": 1 00:13:13.067 }, 00:13:13.067 { 00:13:13.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.067 "dma_device_type": 2 00:13:13.067 }, 00:13:13.067 { 00:13:13.067 "dma_device_id": "system", 00:13:13.067 "dma_device_type": 1 00:13:13.067 }, 00:13:13.067 { 00:13:13.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:13:13.067 "dma_device_type": 2 00:13:13.067 }, 00:13:13.067 { 00:13:13.067 "dma_device_id": "system", 00:13:13.067 "dma_device_type": 1 00:13:13.067 }, 00:13:13.067 { 00:13:13.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.067 "dma_device_type": 2 00:13:13.067 } 00:13:13.067 ], 00:13:13.067 "driver_specific": { 00:13:13.067 "raid": { 00:13:13.067 "uuid": "e2d77381-0ed9-41ee-9cb6-02f858fe5204", 00:13:13.067 "strip_size_kb": 0, 00:13:13.067 "state": "online", 00:13:13.067 "raid_level": "raid1", 00:13:13.067 "superblock": false, 00:13:13.067 "num_base_bdevs": 4, 00:13:13.067 "num_base_bdevs_discovered": 4, 00:13:13.067 "num_base_bdevs_operational": 4, 00:13:13.067 "base_bdevs_list": [ 00:13:13.067 { 00:13:13.067 "name": "BaseBdev1", 00:13:13.067 "uuid": "2a914304-4f71-43fb-bdd3-9d7e57d863a4", 00:13:13.067 "is_configured": true, 00:13:13.067 "data_offset": 0, 00:13:13.067 "data_size": 65536 00:13:13.067 }, 00:13:13.067 { 00:13:13.067 "name": "BaseBdev2", 00:13:13.067 "uuid": "ffff192a-1b07-49b0-bb8d-0eb09b51a5ce", 00:13:13.067 "is_configured": true, 00:13:13.067 "data_offset": 0, 00:13:13.067 "data_size": 65536 00:13:13.067 }, 00:13:13.067 { 00:13:13.067 "name": "BaseBdev3", 00:13:13.067 "uuid": "1b0eae87-e038-41d4-b722-3bf9dbbd6dd1", 00:13:13.067 "is_configured": true, 00:13:13.067 "data_offset": 0, 00:13:13.067 "data_size": 65536 00:13:13.067 }, 00:13:13.067 { 00:13:13.067 "name": "BaseBdev4", 00:13:13.067 "uuid": "be1670c0-430c-44fa-8409-139f17a43003", 00:13:13.067 "is_configured": true, 00:13:13.067 "data_offset": 0, 00:13:13.067 "data_size": 65536 00:13:13.067 } 00:13:13.067 ] 00:13:13.067 } 00:13:13.067 } 00:13:13.067 }' 00:13:13.067 09:50:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:13.067 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:13.067 BaseBdev2 00:13:13.067 BaseBdev3 
00:13:13.067 BaseBdev4' 00:13:13.067 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.067 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:13.067 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.067 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:13.068 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.068 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.068 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.068 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.068 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.068 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.068 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.068 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:13.068 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.068 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.068 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.068 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.068 09:50:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.068 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.068 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.068 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:13.068 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.068 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.068 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.068 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.327 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:13.328 09:50:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.328 [2024-11-27 09:50:14.247266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.328 
09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.328 "name": "Existed_Raid", 00:13:13.328 "uuid": "e2d77381-0ed9-41ee-9cb6-02f858fe5204", 00:13:13.328 "strip_size_kb": 0, 00:13:13.328 "state": "online", 00:13:13.328 "raid_level": "raid1", 00:13:13.328 "superblock": false, 00:13:13.328 "num_base_bdevs": 4, 00:13:13.328 "num_base_bdevs_discovered": 3, 00:13:13.328 "num_base_bdevs_operational": 3, 00:13:13.328 "base_bdevs_list": [ 00:13:13.328 { 00:13:13.328 "name": null, 00:13:13.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.328 "is_configured": false, 00:13:13.328 "data_offset": 0, 00:13:13.328 "data_size": 65536 00:13:13.328 }, 00:13:13.328 { 00:13:13.328 "name": "BaseBdev2", 00:13:13.328 "uuid": "ffff192a-1b07-49b0-bb8d-0eb09b51a5ce", 00:13:13.328 "is_configured": true, 00:13:13.328 "data_offset": 0, 00:13:13.328 "data_size": 65536 00:13:13.328 }, 00:13:13.328 { 00:13:13.328 "name": "BaseBdev3", 00:13:13.328 "uuid": "1b0eae87-e038-41d4-b722-3bf9dbbd6dd1", 00:13:13.328 "is_configured": true, 00:13:13.328 "data_offset": 0, 
00:13:13.328 "data_size": 65536 00:13:13.328 }, 00:13:13.328 { 00:13:13.328 "name": "BaseBdev4", 00:13:13.328 "uuid": "be1670c0-430c-44fa-8409-139f17a43003", 00:13:13.328 "is_configured": true, 00:13:13.328 "data_offset": 0, 00:13:13.328 "data_size": 65536 00:13:13.328 } 00:13:13.328 ] 00:13:13.328 }' 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.328 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.897 [2024-11-27 09:50:14.790667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.897 09:50:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.897 [2024-11-27 09:50:14.951890] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.157 [2024-11-27 09:50:15.114430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:14.157 [2024-11-27 09:50:15.114593] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:14.157 [2024-11-27 09:50:15.218139] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.157 [2024-11-27 09:50:15.218274] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.157 [2024-11-27 09:50:15.218328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.157 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.418 BaseBdev2 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.418 [ 00:13:14.418 { 00:13:14.418 "name": "BaseBdev2", 00:13:14.418 "aliases": [ 00:13:14.418 "ba9f6c24-178f-493e-860e-81f2777bead6" 00:13:14.418 ], 00:13:14.418 "product_name": "Malloc disk", 00:13:14.418 "block_size": 512, 00:13:14.418 "num_blocks": 65536, 00:13:14.418 "uuid": "ba9f6c24-178f-493e-860e-81f2777bead6", 00:13:14.418 "assigned_rate_limits": { 00:13:14.418 "rw_ios_per_sec": 0, 00:13:14.418 "rw_mbytes_per_sec": 0, 00:13:14.418 "r_mbytes_per_sec": 0, 00:13:14.418 "w_mbytes_per_sec": 0 00:13:14.418 }, 00:13:14.418 "claimed": false, 00:13:14.418 "zoned": false, 00:13:14.418 "supported_io_types": { 00:13:14.418 "read": true, 00:13:14.418 "write": true, 00:13:14.418 "unmap": true, 00:13:14.418 "flush": true, 00:13:14.418 "reset": true, 00:13:14.418 "nvme_admin": false, 00:13:14.418 "nvme_io": false, 00:13:14.418 "nvme_io_md": false, 00:13:14.418 "write_zeroes": true, 00:13:14.418 "zcopy": true, 00:13:14.418 "get_zone_info": false, 00:13:14.418 "zone_management": false, 00:13:14.418 "zone_append": false, 
00:13:14.418 "compare": false, 00:13:14.418 "compare_and_write": false, 00:13:14.418 "abort": true, 00:13:14.418 "seek_hole": false, 00:13:14.418 "seek_data": false, 00:13:14.418 "copy": true, 00:13:14.418 "nvme_iov_md": false 00:13:14.418 }, 00:13:14.418 "memory_domains": [ 00:13:14.418 { 00:13:14.418 "dma_device_id": "system", 00:13:14.418 "dma_device_type": 1 00:13:14.418 }, 00:13:14.418 { 00:13:14.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.418 "dma_device_type": 2 00:13:14.418 } 00:13:14.418 ], 00:13:14.418 "driver_specific": {} 00:13:14.418 } 00:13:14.418 ] 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.418 BaseBdev3 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.418 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.418 [ 00:13:14.418 { 00:13:14.418 "name": "BaseBdev3", 00:13:14.418 "aliases": [ 00:13:14.418 "89bce345-c322-46e2-a617-6f31e9e893f8" 00:13:14.418 ], 00:13:14.418 "product_name": "Malloc disk", 00:13:14.418 "block_size": 512, 00:13:14.418 "num_blocks": 65536, 00:13:14.418 "uuid": "89bce345-c322-46e2-a617-6f31e9e893f8", 00:13:14.418 "assigned_rate_limits": { 00:13:14.418 "rw_ios_per_sec": 0, 00:13:14.418 "rw_mbytes_per_sec": 0, 00:13:14.418 "r_mbytes_per_sec": 0, 00:13:14.418 "w_mbytes_per_sec": 0 00:13:14.418 }, 00:13:14.418 "claimed": false, 00:13:14.418 "zoned": false, 00:13:14.418 "supported_io_types": { 00:13:14.418 "read": true, 00:13:14.418 "write": true, 00:13:14.418 "unmap": true, 00:13:14.418 "flush": true, 00:13:14.418 "reset": true, 00:13:14.418 "nvme_admin": false, 00:13:14.418 "nvme_io": false, 00:13:14.418 "nvme_io_md": false, 00:13:14.418 "write_zeroes": true, 00:13:14.418 "zcopy": true, 00:13:14.418 "get_zone_info": false, 00:13:14.418 "zone_management": false, 00:13:14.418 "zone_append": false, 
00:13:14.418 "compare": false, 00:13:14.418 "compare_and_write": false, 00:13:14.418 "abort": true, 00:13:14.418 "seek_hole": false, 00:13:14.418 "seek_data": false, 00:13:14.418 "copy": true, 00:13:14.418 "nvme_iov_md": false 00:13:14.418 }, 00:13:14.418 "memory_domains": [ 00:13:14.419 { 00:13:14.419 "dma_device_id": "system", 00:13:14.419 "dma_device_type": 1 00:13:14.419 }, 00:13:14.419 { 00:13:14.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.419 "dma_device_type": 2 00:13:14.419 } 00:13:14.419 ], 00:13:14.419 "driver_specific": {} 00:13:14.419 } 00:13:14.419 ] 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.419 BaseBdev4 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.419 [ 00:13:14.419 { 00:13:14.419 "name": "BaseBdev4", 00:13:14.419 "aliases": [ 00:13:14.419 "3b6653c8-4786-4030-9f4a-77099c8bc059" 00:13:14.419 ], 00:13:14.419 "product_name": "Malloc disk", 00:13:14.419 "block_size": 512, 00:13:14.419 "num_blocks": 65536, 00:13:14.419 "uuid": "3b6653c8-4786-4030-9f4a-77099c8bc059", 00:13:14.419 "assigned_rate_limits": { 00:13:14.419 "rw_ios_per_sec": 0, 00:13:14.419 "rw_mbytes_per_sec": 0, 00:13:14.419 "r_mbytes_per_sec": 0, 00:13:14.419 "w_mbytes_per_sec": 0 00:13:14.419 }, 00:13:14.419 "claimed": false, 00:13:14.419 "zoned": false, 00:13:14.419 "supported_io_types": { 00:13:14.419 "read": true, 00:13:14.419 "write": true, 00:13:14.419 "unmap": true, 00:13:14.419 "flush": true, 00:13:14.419 "reset": true, 00:13:14.419 "nvme_admin": false, 00:13:14.419 "nvme_io": false, 00:13:14.419 "nvme_io_md": false, 00:13:14.419 "write_zeroes": true, 00:13:14.419 "zcopy": true, 00:13:14.419 "get_zone_info": false, 00:13:14.419 "zone_management": false, 00:13:14.419 "zone_append": false, 
00:13:14.419 "compare": false, 00:13:14.419 "compare_and_write": false, 00:13:14.419 "abort": true, 00:13:14.419 "seek_hole": false, 00:13:14.419 "seek_data": false, 00:13:14.419 "copy": true, 00:13:14.419 "nvme_iov_md": false 00:13:14.419 }, 00:13:14.419 "memory_domains": [ 00:13:14.419 { 00:13:14.419 "dma_device_id": "system", 00:13:14.419 "dma_device_type": 1 00:13:14.419 }, 00:13:14.419 { 00:13:14.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.419 "dma_device_type": 2 00:13:14.419 } 00:13:14.419 ], 00:13:14.419 "driver_specific": {} 00:13:14.419 } 00:13:14.419 ] 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.419 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.419 [2024-11-27 09:50:15.547955] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:14.419 [2024-11-27 09:50:15.548058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:14.419 [2024-11-27 09:50:15.548116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:14.679 [2024-11-27 09:50:15.550335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:14.679 [2024-11-27 09:50:15.550426] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:14.679 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.679 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:14.679 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.679 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.679 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.679 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.679 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.679 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.679 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.679 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.679 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.679 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.679 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.679 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.679 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.679 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.679 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:13:14.679 "name": "Existed_Raid", 00:13:14.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.679 "strip_size_kb": 0, 00:13:14.679 "state": "configuring", 00:13:14.679 "raid_level": "raid1", 00:13:14.679 "superblock": false, 00:13:14.679 "num_base_bdevs": 4, 00:13:14.679 "num_base_bdevs_discovered": 3, 00:13:14.679 "num_base_bdevs_operational": 4, 00:13:14.679 "base_bdevs_list": [ 00:13:14.679 { 00:13:14.679 "name": "BaseBdev1", 00:13:14.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.679 "is_configured": false, 00:13:14.679 "data_offset": 0, 00:13:14.679 "data_size": 0 00:13:14.679 }, 00:13:14.679 { 00:13:14.679 "name": "BaseBdev2", 00:13:14.679 "uuid": "ba9f6c24-178f-493e-860e-81f2777bead6", 00:13:14.679 "is_configured": true, 00:13:14.679 "data_offset": 0, 00:13:14.679 "data_size": 65536 00:13:14.679 }, 00:13:14.679 { 00:13:14.679 "name": "BaseBdev3", 00:13:14.679 "uuid": "89bce345-c322-46e2-a617-6f31e9e893f8", 00:13:14.679 "is_configured": true, 00:13:14.679 "data_offset": 0, 00:13:14.679 "data_size": 65536 00:13:14.679 }, 00:13:14.679 { 00:13:14.679 "name": "BaseBdev4", 00:13:14.679 "uuid": "3b6653c8-4786-4030-9f4a-77099c8bc059", 00:13:14.679 "is_configured": true, 00:13:14.679 "data_offset": 0, 00:13:14.679 "data_size": 65536 00:13:14.679 } 00:13:14.679 ] 00:13:14.679 }' 00:13:14.679 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.679 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.938 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:14.938 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.938 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.938 [2024-11-27 09:50:15.995228] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:13:14.938 09:50:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.938 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:14.938 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.938 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.938 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.938 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.939 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:14.939 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.939 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.939 09:50:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.939 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.939 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.939 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.939 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.939 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.939 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.939 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.939 "name": "Existed_Raid", 00:13:14.939 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:14.939 "strip_size_kb": 0, 00:13:14.939 "state": "configuring", 00:13:14.939 "raid_level": "raid1", 00:13:14.939 "superblock": false, 00:13:14.939 "num_base_bdevs": 4, 00:13:14.939 "num_base_bdevs_discovered": 2, 00:13:14.939 "num_base_bdevs_operational": 4, 00:13:14.939 "base_bdevs_list": [ 00:13:14.939 { 00:13:14.939 "name": "BaseBdev1", 00:13:14.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.939 "is_configured": false, 00:13:14.939 "data_offset": 0, 00:13:14.939 "data_size": 0 00:13:14.939 }, 00:13:14.939 { 00:13:14.939 "name": null, 00:13:14.939 "uuid": "ba9f6c24-178f-493e-860e-81f2777bead6", 00:13:14.939 "is_configured": false, 00:13:14.939 "data_offset": 0, 00:13:14.939 "data_size": 65536 00:13:14.939 }, 00:13:14.939 { 00:13:14.939 "name": "BaseBdev3", 00:13:14.939 "uuid": "89bce345-c322-46e2-a617-6f31e9e893f8", 00:13:14.939 "is_configured": true, 00:13:14.939 "data_offset": 0, 00:13:14.939 "data_size": 65536 00:13:14.939 }, 00:13:14.939 { 00:13:14.939 "name": "BaseBdev4", 00:13:14.939 "uuid": "3b6653c8-4786-4030-9f4a-77099c8bc059", 00:13:14.939 "is_configured": true, 00:13:14.939 "data_offset": 0, 00:13:14.939 "data_size": 65536 00:13:14.939 } 00:13:14.939 ] 00:13:14.939 }' 00:13:14.939 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.939 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.508 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.508 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.508 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.508 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:15.508 09:50:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.508 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:15.508 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:15.508 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.508 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.508 [2024-11-27 09:50:16.573417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.508 BaseBdev1 00:13:15.508 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.508 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:15.508 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:15.508 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:15.508 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:15.508 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:15.508 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.509 [ 00:13:15.509 { 00:13:15.509 "name": "BaseBdev1", 00:13:15.509 "aliases": [ 00:13:15.509 "7a9e7ce2-0d40-471c-84b8-3f32e1f746d8" 00:13:15.509 ], 00:13:15.509 "product_name": "Malloc disk", 00:13:15.509 "block_size": 512, 00:13:15.509 "num_blocks": 65536, 00:13:15.509 "uuid": "7a9e7ce2-0d40-471c-84b8-3f32e1f746d8", 00:13:15.509 "assigned_rate_limits": { 00:13:15.509 "rw_ios_per_sec": 0, 00:13:15.509 "rw_mbytes_per_sec": 0, 00:13:15.509 "r_mbytes_per_sec": 0, 00:13:15.509 "w_mbytes_per_sec": 0 00:13:15.509 }, 00:13:15.509 "claimed": true, 00:13:15.509 "claim_type": "exclusive_write", 00:13:15.509 "zoned": false, 00:13:15.509 "supported_io_types": { 00:13:15.509 "read": true, 00:13:15.509 "write": true, 00:13:15.509 "unmap": true, 00:13:15.509 "flush": true, 00:13:15.509 "reset": true, 00:13:15.509 "nvme_admin": false, 00:13:15.509 "nvme_io": false, 00:13:15.509 "nvme_io_md": false, 00:13:15.509 "write_zeroes": true, 00:13:15.509 "zcopy": true, 00:13:15.509 "get_zone_info": false, 00:13:15.509 "zone_management": false, 00:13:15.509 "zone_append": false, 00:13:15.509 "compare": false, 00:13:15.509 "compare_and_write": false, 00:13:15.509 "abort": true, 00:13:15.509 "seek_hole": false, 00:13:15.509 "seek_data": false, 00:13:15.509 "copy": true, 00:13:15.509 "nvme_iov_md": false 00:13:15.509 }, 00:13:15.509 "memory_domains": [ 00:13:15.509 { 00:13:15.509 "dma_device_id": "system", 00:13:15.509 "dma_device_type": 1 00:13:15.509 }, 00:13:15.509 { 00:13:15.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.509 "dma_device_type": 2 00:13:15.509 } 00:13:15.509 ], 00:13:15.509 "driver_specific": {} 00:13:15.509 } 00:13:15.509 ] 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.509 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.769 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.769 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.769 "name": "Existed_Raid", 00:13:15.769 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:15.769 "strip_size_kb": 0, 00:13:15.769 "state": "configuring", 00:13:15.769 "raid_level": "raid1", 00:13:15.769 "superblock": false, 00:13:15.769 "num_base_bdevs": 4, 00:13:15.769 "num_base_bdevs_discovered": 3, 00:13:15.769 "num_base_bdevs_operational": 4, 00:13:15.769 "base_bdevs_list": [ 00:13:15.769 { 00:13:15.769 "name": "BaseBdev1", 00:13:15.769 "uuid": "7a9e7ce2-0d40-471c-84b8-3f32e1f746d8", 00:13:15.769 "is_configured": true, 00:13:15.769 "data_offset": 0, 00:13:15.769 "data_size": 65536 00:13:15.769 }, 00:13:15.769 { 00:13:15.769 "name": null, 00:13:15.769 "uuid": "ba9f6c24-178f-493e-860e-81f2777bead6", 00:13:15.769 "is_configured": false, 00:13:15.769 "data_offset": 0, 00:13:15.769 "data_size": 65536 00:13:15.769 }, 00:13:15.769 { 00:13:15.769 "name": "BaseBdev3", 00:13:15.769 "uuid": "89bce345-c322-46e2-a617-6f31e9e893f8", 00:13:15.769 "is_configured": true, 00:13:15.769 "data_offset": 0, 00:13:15.769 "data_size": 65536 00:13:15.769 }, 00:13:15.769 { 00:13:15.769 "name": "BaseBdev4", 00:13:15.769 "uuid": "3b6653c8-4786-4030-9f4a-77099c8bc059", 00:13:15.769 "is_configured": true, 00:13:15.769 "data_offset": 0, 00:13:15.769 "data_size": 65536 00:13:15.769 } 00:13:15.769 ] 00:13:15.769 }' 00:13:15.769 09:50:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.769 09:50:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.029 [2024-11-27 09:50:17.092599] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.029 "name": "Existed_Raid", 00:13:16.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.029 "strip_size_kb": 0, 00:13:16.029 "state": "configuring", 00:13:16.029 "raid_level": "raid1", 00:13:16.029 "superblock": false, 00:13:16.029 "num_base_bdevs": 4, 00:13:16.029 "num_base_bdevs_discovered": 2, 00:13:16.029 "num_base_bdevs_operational": 4, 00:13:16.029 "base_bdevs_list": [ 00:13:16.029 { 00:13:16.029 "name": "BaseBdev1", 00:13:16.029 "uuid": "7a9e7ce2-0d40-471c-84b8-3f32e1f746d8", 00:13:16.029 "is_configured": true, 00:13:16.029 "data_offset": 0, 00:13:16.029 "data_size": 65536 00:13:16.029 }, 00:13:16.029 { 00:13:16.029 "name": null, 00:13:16.029 "uuid": "ba9f6c24-178f-493e-860e-81f2777bead6", 00:13:16.029 "is_configured": false, 00:13:16.029 "data_offset": 0, 00:13:16.029 "data_size": 65536 00:13:16.029 }, 00:13:16.029 { 00:13:16.029 "name": null, 00:13:16.029 "uuid": "89bce345-c322-46e2-a617-6f31e9e893f8", 00:13:16.029 "is_configured": false, 00:13:16.029 "data_offset": 0, 00:13:16.029 "data_size": 65536 00:13:16.029 }, 00:13:16.029 { 00:13:16.029 "name": "BaseBdev4", 00:13:16.029 "uuid": "3b6653c8-4786-4030-9f4a-77099c8bc059", 00:13:16.029 "is_configured": true, 00:13:16.029 "data_offset": 0, 00:13:16.029 "data_size": 65536 00:13:16.029 } 00:13:16.029 ] 00:13:16.029 }' 00:13:16.029 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.029 09:50:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.599 [2024-11-27 09:50:17.591842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.599 09:50:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.599 "name": "Existed_Raid", 00:13:16.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.599 "strip_size_kb": 0, 00:13:16.599 "state": "configuring", 00:13:16.599 "raid_level": "raid1", 00:13:16.599 "superblock": false, 00:13:16.599 "num_base_bdevs": 4, 00:13:16.599 "num_base_bdevs_discovered": 3, 00:13:16.599 "num_base_bdevs_operational": 4, 00:13:16.599 "base_bdevs_list": [ 00:13:16.599 { 00:13:16.599 "name": "BaseBdev1", 00:13:16.599 "uuid": "7a9e7ce2-0d40-471c-84b8-3f32e1f746d8", 00:13:16.599 "is_configured": true, 00:13:16.599 "data_offset": 0, 00:13:16.599 "data_size": 65536 00:13:16.599 }, 00:13:16.599 { 00:13:16.599 "name": null, 00:13:16.599 "uuid": "ba9f6c24-178f-493e-860e-81f2777bead6", 00:13:16.599 "is_configured": false, 00:13:16.599 "data_offset": 
0, 00:13:16.599 "data_size": 65536 00:13:16.599 }, 00:13:16.599 { 00:13:16.599 "name": "BaseBdev3", 00:13:16.599 "uuid": "89bce345-c322-46e2-a617-6f31e9e893f8", 00:13:16.599 "is_configured": true, 00:13:16.599 "data_offset": 0, 00:13:16.599 "data_size": 65536 00:13:16.599 }, 00:13:16.599 { 00:13:16.599 "name": "BaseBdev4", 00:13:16.599 "uuid": "3b6653c8-4786-4030-9f4a-77099c8bc059", 00:13:16.599 "is_configured": true, 00:13:16.599 "data_offset": 0, 00:13:16.599 "data_size": 65536 00:13:16.599 } 00:13:16.599 ] 00:13:16.599 }' 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.599 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.171 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.171 09:50:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:17.171 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.171 09:50:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.171 [2024-11-27 09:50:18.047089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.171 09:50:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.171 "name": "Existed_Raid", 00:13:17.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.171 "strip_size_kb": 0, 00:13:17.171 "state": "configuring", 00:13:17.171 
"raid_level": "raid1", 00:13:17.171 "superblock": false, 00:13:17.171 "num_base_bdevs": 4, 00:13:17.171 "num_base_bdevs_discovered": 2, 00:13:17.171 "num_base_bdevs_operational": 4, 00:13:17.171 "base_bdevs_list": [ 00:13:17.171 { 00:13:17.171 "name": null, 00:13:17.171 "uuid": "7a9e7ce2-0d40-471c-84b8-3f32e1f746d8", 00:13:17.171 "is_configured": false, 00:13:17.171 "data_offset": 0, 00:13:17.171 "data_size": 65536 00:13:17.171 }, 00:13:17.171 { 00:13:17.171 "name": null, 00:13:17.171 "uuid": "ba9f6c24-178f-493e-860e-81f2777bead6", 00:13:17.171 "is_configured": false, 00:13:17.171 "data_offset": 0, 00:13:17.171 "data_size": 65536 00:13:17.171 }, 00:13:17.171 { 00:13:17.171 "name": "BaseBdev3", 00:13:17.171 "uuid": "89bce345-c322-46e2-a617-6f31e9e893f8", 00:13:17.171 "is_configured": true, 00:13:17.171 "data_offset": 0, 00:13:17.171 "data_size": 65536 00:13:17.171 }, 00:13:17.171 { 00:13:17.171 "name": "BaseBdev4", 00:13:17.171 "uuid": "3b6653c8-4786-4030-9f4a-77099c8bc059", 00:13:17.171 "is_configured": true, 00:13:17.171 "data_offset": 0, 00:13:17.171 "data_size": 65536 00:13:17.171 } 00:13:17.171 ] 00:13:17.171 }' 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.171 09:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.432 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.432 09:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.432 09:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.432 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:17.432 09:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.694 [2024-11-27 09:50:18.602547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.694 "name": "Existed_Raid", 00:13:17.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.694 "strip_size_kb": 0, 00:13:17.694 "state": "configuring", 00:13:17.694 "raid_level": "raid1", 00:13:17.694 "superblock": false, 00:13:17.694 "num_base_bdevs": 4, 00:13:17.694 "num_base_bdevs_discovered": 3, 00:13:17.694 "num_base_bdevs_operational": 4, 00:13:17.694 "base_bdevs_list": [ 00:13:17.694 { 00:13:17.694 "name": null, 00:13:17.694 "uuid": "7a9e7ce2-0d40-471c-84b8-3f32e1f746d8", 00:13:17.694 "is_configured": false, 00:13:17.694 "data_offset": 0, 00:13:17.694 "data_size": 65536 00:13:17.694 }, 00:13:17.694 { 00:13:17.694 "name": "BaseBdev2", 00:13:17.694 "uuid": "ba9f6c24-178f-493e-860e-81f2777bead6", 00:13:17.694 "is_configured": true, 00:13:17.694 "data_offset": 0, 00:13:17.694 "data_size": 65536 00:13:17.694 }, 00:13:17.694 { 00:13:17.694 "name": "BaseBdev3", 00:13:17.694 "uuid": "89bce345-c322-46e2-a617-6f31e9e893f8", 00:13:17.694 "is_configured": true, 00:13:17.694 "data_offset": 0, 00:13:17.694 "data_size": 65536 00:13:17.694 }, 00:13:17.694 { 00:13:17.694 "name": "BaseBdev4", 00:13:17.694 "uuid": "3b6653c8-4786-4030-9f4a-77099c8bc059", 00:13:17.694 "is_configured": true, 00:13:17.694 "data_offset": 0, 00:13:17.694 "data_size": 65536 00:13:17.694 } 00:13:17.694 ] 00:13:17.694 }' 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.694 09:50:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.955 09:50:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:17.955 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.955 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.955 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.955 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.955 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:17.955 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:17.955 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.955 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.955 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.218 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.218 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7a9e7ce2-0d40-471c-84b8-3f32e1f746d8 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.219 [2024-11-27 09:50:19.149597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:18.219 [2024-11-27 09:50:19.149653] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:18.219 [2024-11-27 09:50:19.149663] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:18.219 
[2024-11-27 09:50:19.149958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:18.219 [2024-11-27 09:50:19.150199] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:18.219 [2024-11-27 09:50:19.150210] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:18.219 [2024-11-27 09:50:19.150538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.219 NewBaseBdev 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.219 [ 00:13:18.219 { 00:13:18.219 "name": "NewBaseBdev", 00:13:18.219 "aliases": [ 00:13:18.219 "7a9e7ce2-0d40-471c-84b8-3f32e1f746d8" 00:13:18.219 ], 00:13:18.219 "product_name": "Malloc disk", 00:13:18.219 "block_size": 512, 00:13:18.219 "num_blocks": 65536, 00:13:18.219 "uuid": "7a9e7ce2-0d40-471c-84b8-3f32e1f746d8", 00:13:18.219 "assigned_rate_limits": { 00:13:18.219 "rw_ios_per_sec": 0, 00:13:18.219 "rw_mbytes_per_sec": 0, 00:13:18.219 "r_mbytes_per_sec": 0, 00:13:18.219 "w_mbytes_per_sec": 0 00:13:18.219 }, 00:13:18.219 "claimed": true, 00:13:18.219 "claim_type": "exclusive_write", 00:13:18.219 "zoned": false, 00:13:18.219 "supported_io_types": { 00:13:18.219 "read": true, 00:13:18.219 "write": true, 00:13:18.219 "unmap": true, 00:13:18.219 "flush": true, 00:13:18.219 "reset": true, 00:13:18.219 "nvme_admin": false, 00:13:18.219 "nvme_io": false, 00:13:18.219 "nvme_io_md": false, 00:13:18.219 "write_zeroes": true, 00:13:18.219 "zcopy": true, 00:13:18.219 "get_zone_info": false, 00:13:18.219 "zone_management": false, 00:13:18.219 "zone_append": false, 00:13:18.219 "compare": false, 00:13:18.219 "compare_and_write": false, 00:13:18.219 "abort": true, 00:13:18.219 "seek_hole": false, 00:13:18.219 "seek_data": false, 00:13:18.219 "copy": true, 00:13:18.219 "nvme_iov_md": false 00:13:18.219 }, 00:13:18.219 "memory_domains": [ 00:13:18.219 { 00:13:18.219 "dma_device_id": "system", 00:13:18.219 "dma_device_type": 1 00:13:18.219 }, 00:13:18.219 { 00:13:18.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.219 "dma_device_type": 2 00:13:18.219 } 00:13:18.219 ], 00:13:18.219 "driver_specific": {} 00:13:18.219 } 00:13:18.219 ] 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.219 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.219 "name": "Existed_Raid", 00:13:18.219 "uuid": "780f7c48-6efa-4969-bd7b-280e6c06e0e2", 00:13:18.219 "strip_size_kb": 0, 00:13:18.219 "state": "online", 00:13:18.219 
"raid_level": "raid1", 00:13:18.219 "superblock": false, 00:13:18.219 "num_base_bdevs": 4, 00:13:18.219 "num_base_bdevs_discovered": 4, 00:13:18.219 "num_base_bdevs_operational": 4, 00:13:18.219 "base_bdevs_list": [ 00:13:18.219 { 00:13:18.219 "name": "NewBaseBdev", 00:13:18.219 "uuid": "7a9e7ce2-0d40-471c-84b8-3f32e1f746d8", 00:13:18.219 "is_configured": true, 00:13:18.219 "data_offset": 0, 00:13:18.219 "data_size": 65536 00:13:18.219 }, 00:13:18.219 { 00:13:18.219 "name": "BaseBdev2", 00:13:18.219 "uuid": "ba9f6c24-178f-493e-860e-81f2777bead6", 00:13:18.219 "is_configured": true, 00:13:18.219 "data_offset": 0, 00:13:18.219 "data_size": 65536 00:13:18.219 }, 00:13:18.219 { 00:13:18.219 "name": "BaseBdev3", 00:13:18.219 "uuid": "89bce345-c322-46e2-a617-6f31e9e893f8", 00:13:18.219 "is_configured": true, 00:13:18.219 "data_offset": 0, 00:13:18.219 "data_size": 65536 00:13:18.219 }, 00:13:18.219 { 00:13:18.219 "name": "BaseBdev4", 00:13:18.219 "uuid": "3b6653c8-4786-4030-9f4a-77099c8bc059", 00:13:18.219 "is_configured": true, 00:13:18.219 "data_offset": 0, 00:13:18.219 "data_size": 65536 00:13:18.219 } 00:13:18.219 ] 00:13:18.220 }' 00:13:18.220 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.220 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.790 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:18.790 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:18.790 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:18.790 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:18.790 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:18.790 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:13:18.790 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:18.790 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:18.790 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.790 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.790 [2024-11-27 09:50:19.645217] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:18.790 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.790 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:18.790 "name": "Existed_Raid", 00:13:18.790 "aliases": [ 00:13:18.790 "780f7c48-6efa-4969-bd7b-280e6c06e0e2" 00:13:18.790 ], 00:13:18.790 "product_name": "Raid Volume", 00:13:18.790 "block_size": 512, 00:13:18.790 "num_blocks": 65536, 00:13:18.790 "uuid": "780f7c48-6efa-4969-bd7b-280e6c06e0e2", 00:13:18.790 "assigned_rate_limits": { 00:13:18.790 "rw_ios_per_sec": 0, 00:13:18.790 "rw_mbytes_per_sec": 0, 00:13:18.790 "r_mbytes_per_sec": 0, 00:13:18.790 "w_mbytes_per_sec": 0 00:13:18.790 }, 00:13:18.790 "claimed": false, 00:13:18.790 "zoned": false, 00:13:18.790 "supported_io_types": { 00:13:18.790 "read": true, 00:13:18.790 "write": true, 00:13:18.790 "unmap": false, 00:13:18.790 "flush": false, 00:13:18.790 "reset": true, 00:13:18.790 "nvme_admin": false, 00:13:18.790 "nvme_io": false, 00:13:18.790 "nvme_io_md": false, 00:13:18.790 "write_zeroes": true, 00:13:18.790 "zcopy": false, 00:13:18.790 "get_zone_info": false, 00:13:18.790 "zone_management": false, 00:13:18.790 "zone_append": false, 00:13:18.790 "compare": false, 00:13:18.790 "compare_and_write": false, 00:13:18.790 "abort": false, 00:13:18.790 "seek_hole": false, 00:13:18.790 "seek_data": false, 00:13:18.790 
"copy": false, 00:13:18.790 "nvme_iov_md": false 00:13:18.790 }, 00:13:18.790 "memory_domains": [ 00:13:18.790 { 00:13:18.790 "dma_device_id": "system", 00:13:18.790 "dma_device_type": 1 00:13:18.790 }, 00:13:18.790 { 00:13:18.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.790 "dma_device_type": 2 00:13:18.790 }, 00:13:18.790 { 00:13:18.790 "dma_device_id": "system", 00:13:18.790 "dma_device_type": 1 00:13:18.790 }, 00:13:18.790 { 00:13:18.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.790 "dma_device_type": 2 00:13:18.790 }, 00:13:18.790 { 00:13:18.790 "dma_device_id": "system", 00:13:18.790 "dma_device_type": 1 00:13:18.790 }, 00:13:18.790 { 00:13:18.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.790 "dma_device_type": 2 00:13:18.790 }, 00:13:18.790 { 00:13:18.790 "dma_device_id": "system", 00:13:18.790 "dma_device_type": 1 00:13:18.790 }, 00:13:18.790 { 00:13:18.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.790 "dma_device_type": 2 00:13:18.790 } 00:13:18.790 ], 00:13:18.790 "driver_specific": { 00:13:18.790 "raid": { 00:13:18.790 "uuid": "780f7c48-6efa-4969-bd7b-280e6c06e0e2", 00:13:18.790 "strip_size_kb": 0, 00:13:18.790 "state": "online", 00:13:18.790 "raid_level": "raid1", 00:13:18.790 "superblock": false, 00:13:18.791 "num_base_bdevs": 4, 00:13:18.791 "num_base_bdevs_discovered": 4, 00:13:18.791 "num_base_bdevs_operational": 4, 00:13:18.791 "base_bdevs_list": [ 00:13:18.791 { 00:13:18.791 "name": "NewBaseBdev", 00:13:18.791 "uuid": "7a9e7ce2-0d40-471c-84b8-3f32e1f746d8", 00:13:18.791 "is_configured": true, 00:13:18.791 "data_offset": 0, 00:13:18.791 "data_size": 65536 00:13:18.791 }, 00:13:18.791 { 00:13:18.791 "name": "BaseBdev2", 00:13:18.791 "uuid": "ba9f6c24-178f-493e-860e-81f2777bead6", 00:13:18.791 "is_configured": true, 00:13:18.791 "data_offset": 0, 00:13:18.791 "data_size": 65536 00:13:18.791 }, 00:13:18.791 { 00:13:18.791 "name": "BaseBdev3", 00:13:18.791 "uuid": "89bce345-c322-46e2-a617-6f31e9e893f8", 00:13:18.791 
"is_configured": true, 00:13:18.791 "data_offset": 0, 00:13:18.791 "data_size": 65536 00:13:18.791 }, 00:13:18.791 { 00:13:18.791 "name": "BaseBdev4", 00:13:18.791 "uuid": "3b6653c8-4786-4030-9f4a-77099c8bc059", 00:13:18.791 "is_configured": true, 00:13:18.791 "data_offset": 0, 00:13:18.791 "data_size": 65536 00:13:18.791 } 00:13:18.791 ] 00:13:18.791 } 00:13:18.791 } 00:13:18.791 }' 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:18.791 BaseBdev2 00:13:18.791 BaseBdev3 00:13:18.791 BaseBdev4' 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.791 09:50:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:18.791 09:50:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.791 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.052 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:19.052 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:19.052 09:50:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:19.052 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.052 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.052 [2024-11-27 09:50:19.928316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:19.052 [2024-11-27 09:50:19.928395] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:19.052 [2024-11-27 09:50:19.928499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:19.052 [2024-11-27 09:50:19.928836] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:19.052 [2024-11-27 09:50:19.928851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:19.052 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.052 09:50:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73459 00:13:19.052 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73459 ']' 00:13:19.052 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73459 00:13:19.052 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:19.052 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:19.052 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73459 00:13:19.052 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:19.052 killing process with pid 73459 00:13:19.052 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:19.052 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73459' 00:13:19.052 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73459 00:13:19.052 [2024-11-27 09:50:19.975883] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:19.052 09:50:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73459 00:13:19.312 [2024-11-27 09:50:20.405992] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:20.695 ************************************ 00:13:20.695 END TEST raid_state_function_test 00:13:20.695 ************************************ 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:20.695 00:13:20.695 real 0m11.674s 00:13:20.695 user 0m18.244s 00:13:20.695 sys 0m2.180s 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:20.695 09:50:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:13:20.695 09:50:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:20.695 09:50:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.695 09:50:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:20.695 ************************************ 00:13:20.695 START TEST raid_state_function_test_sb 00:13:20.695 ************************************ 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.695 
09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74126 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 
'Process raid pid: 74126' 00:13:20.695 Process raid pid: 74126 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74126 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74126 ']' 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.695 09:50:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.695 [2024-11-27 09:50:21.801483] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:13:20.695 [2024-11-27 09:50:21.801700] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.956 [2024-11-27 09:50:21.981402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.216 [2024-11-27 09:50:22.120695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.476 [2024-11-27 09:50:22.360027] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.476 [2024-11-27 09:50:22.360192] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.737 [2024-11-27 09:50:22.634871] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.737 [2024-11-27 09:50:22.635002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.737 [2024-11-27 09:50:22.635035] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.737 [2024-11-27 09:50:22.635060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.737 [2024-11-27 09:50:22.635078] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:13:21.737 [2024-11-27 09:50:22.635099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:21.737 [2024-11-27 09:50:22.635116] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:21.737 [2024-11-27 09:50:22.635138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.737 09:50:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.737 "name": "Existed_Raid", 00:13:21.737 "uuid": "0029ec41-05e2-4cce-a1df-9af8c85f155a", 00:13:21.737 "strip_size_kb": 0, 00:13:21.737 "state": "configuring", 00:13:21.737 "raid_level": "raid1", 00:13:21.737 "superblock": true, 00:13:21.737 "num_base_bdevs": 4, 00:13:21.737 "num_base_bdevs_discovered": 0, 00:13:21.737 "num_base_bdevs_operational": 4, 00:13:21.737 "base_bdevs_list": [ 00:13:21.737 { 00:13:21.737 "name": "BaseBdev1", 00:13:21.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.737 "is_configured": false, 00:13:21.737 "data_offset": 0, 00:13:21.737 "data_size": 0 00:13:21.737 }, 00:13:21.737 { 00:13:21.737 "name": "BaseBdev2", 00:13:21.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.737 "is_configured": false, 00:13:21.737 "data_offset": 0, 00:13:21.737 "data_size": 0 00:13:21.737 }, 00:13:21.737 { 00:13:21.737 "name": "BaseBdev3", 00:13:21.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.737 "is_configured": false, 00:13:21.737 "data_offset": 0, 00:13:21.737 "data_size": 0 00:13:21.737 }, 00:13:21.737 { 00:13:21.737 "name": "BaseBdev4", 00:13:21.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.737 "is_configured": false, 00:13:21.737 "data_offset": 0, 00:13:21.737 "data_size": 0 00:13:21.737 } 00:13:21.737 ] 00:13:21.737 }' 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.737 09:50:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.998 09:50:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:21.998 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.998 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.998 [2024-11-27 09:50:23.082064] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:21.998 [2024-11-27 09:50:23.082128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:21.998 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.998 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:21.998 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.998 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.998 [2024-11-27 09:50:23.093980] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:21.998 [2024-11-27 09:50:23.094041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:21.998 [2024-11-27 09:50:23.094051] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.998 [2024-11-27 09:50:23.094060] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.998 [2024-11-27 09:50:23.094067] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:21.998 [2024-11-27 09:50:23.094076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:21.998 [2024-11-27 09:50:23.094082] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:13:21.998 [2024-11-27 09:50:23.094090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:21.998 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.998 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:21.998 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.998 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.258 [2024-11-27 09:50:23.142914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.258 BaseBdev1 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.258 [ 00:13:22.258 { 00:13:22.258 "name": "BaseBdev1", 00:13:22.258 "aliases": [ 00:13:22.258 "c2297ff8-87df-4d7a-b9d4-431d970a0142" 00:13:22.258 ], 00:13:22.258 "product_name": "Malloc disk", 00:13:22.258 "block_size": 512, 00:13:22.258 "num_blocks": 65536, 00:13:22.258 "uuid": "c2297ff8-87df-4d7a-b9d4-431d970a0142", 00:13:22.258 "assigned_rate_limits": { 00:13:22.258 "rw_ios_per_sec": 0, 00:13:22.258 "rw_mbytes_per_sec": 0, 00:13:22.258 "r_mbytes_per_sec": 0, 00:13:22.258 "w_mbytes_per_sec": 0 00:13:22.258 }, 00:13:22.258 "claimed": true, 00:13:22.258 "claim_type": "exclusive_write", 00:13:22.258 "zoned": false, 00:13:22.258 "supported_io_types": { 00:13:22.258 "read": true, 00:13:22.258 "write": true, 00:13:22.258 "unmap": true, 00:13:22.258 "flush": true, 00:13:22.258 "reset": true, 00:13:22.258 "nvme_admin": false, 00:13:22.258 "nvme_io": false, 00:13:22.258 "nvme_io_md": false, 00:13:22.258 "write_zeroes": true, 00:13:22.258 "zcopy": true, 00:13:22.258 "get_zone_info": false, 00:13:22.258 "zone_management": false, 00:13:22.258 "zone_append": false, 00:13:22.258 "compare": false, 00:13:22.258 "compare_and_write": false, 00:13:22.258 "abort": true, 00:13:22.258 "seek_hole": false, 00:13:22.258 "seek_data": false, 00:13:22.258 "copy": true, 00:13:22.258 "nvme_iov_md": false 00:13:22.258 }, 00:13:22.258 "memory_domains": [ 00:13:22.258 { 00:13:22.258 "dma_device_id": "system", 00:13:22.258 "dma_device_type": 1 00:13:22.258 }, 00:13:22.258 { 00:13:22.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.258 "dma_device_type": 2 00:13:22.258 } 00:13:22.258 ], 00:13:22.258 "driver_specific": {} 
00:13:22.258 } 00:13:22.258 ] 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.258 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.258 "name": "Existed_Raid", 00:13:22.258 "uuid": "9d96a545-ebd4-4fea-ba24-64c1a1ed9f7a", 00:13:22.258 "strip_size_kb": 0, 00:13:22.258 "state": "configuring", 00:13:22.258 "raid_level": "raid1", 00:13:22.258 "superblock": true, 00:13:22.258 "num_base_bdevs": 4, 00:13:22.258 "num_base_bdevs_discovered": 1, 00:13:22.258 "num_base_bdevs_operational": 4, 00:13:22.258 "base_bdevs_list": [ 00:13:22.258 { 00:13:22.258 "name": "BaseBdev1", 00:13:22.258 "uuid": "c2297ff8-87df-4d7a-b9d4-431d970a0142", 00:13:22.258 "is_configured": true, 00:13:22.258 "data_offset": 2048, 00:13:22.258 "data_size": 63488 00:13:22.258 }, 00:13:22.258 { 00:13:22.258 "name": "BaseBdev2", 00:13:22.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.258 "is_configured": false, 00:13:22.258 "data_offset": 0, 00:13:22.258 "data_size": 0 00:13:22.258 }, 00:13:22.258 { 00:13:22.259 "name": "BaseBdev3", 00:13:22.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.259 "is_configured": false, 00:13:22.259 "data_offset": 0, 00:13:22.259 "data_size": 0 00:13:22.259 }, 00:13:22.259 { 00:13:22.259 "name": "BaseBdev4", 00:13:22.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.259 "is_configured": false, 00:13:22.259 "data_offset": 0, 00:13:22.259 "data_size": 0 00:13:22.259 } 00:13:22.259 ] 00:13:22.259 }' 00:13:22.259 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.259 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.518 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:22.518 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.518 09:50:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.518 [2024-11-27 09:50:23.622141] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:22.518 [2024-11-27 09:50:23.622305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:22.518 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.518 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:22.518 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.519 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.519 [2024-11-27 09:50:23.634160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.519 [2024-11-27 09:50:23.635954] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:22.519 [2024-11-27 09:50:23.636065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:22.519 [2024-11-27 09:50:23.636098] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:22.519 [2024-11-27 09:50:23.636122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:22.519 [2024-11-27 09:50:23.636140] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:22.519 [2024-11-27 09:50:23.636217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:22.519 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.519 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:22.519 09:50:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:22.519 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:22.519 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.519 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.519 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.519 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.519 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.519 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.519 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.519 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.519 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.519 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.519 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.519 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.519 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.779 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.779 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.779 "name": 
"Existed_Raid", 00:13:22.779 "uuid": "f6d9347c-175a-430f-b44d-f5542572abdb", 00:13:22.779 "strip_size_kb": 0, 00:13:22.779 "state": "configuring", 00:13:22.779 "raid_level": "raid1", 00:13:22.779 "superblock": true, 00:13:22.779 "num_base_bdevs": 4, 00:13:22.779 "num_base_bdevs_discovered": 1, 00:13:22.779 "num_base_bdevs_operational": 4, 00:13:22.779 "base_bdevs_list": [ 00:13:22.779 { 00:13:22.779 "name": "BaseBdev1", 00:13:22.779 "uuid": "c2297ff8-87df-4d7a-b9d4-431d970a0142", 00:13:22.779 "is_configured": true, 00:13:22.779 "data_offset": 2048, 00:13:22.779 "data_size": 63488 00:13:22.779 }, 00:13:22.779 { 00:13:22.779 "name": "BaseBdev2", 00:13:22.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.779 "is_configured": false, 00:13:22.779 "data_offset": 0, 00:13:22.779 "data_size": 0 00:13:22.779 }, 00:13:22.779 { 00:13:22.779 "name": "BaseBdev3", 00:13:22.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.779 "is_configured": false, 00:13:22.779 "data_offset": 0, 00:13:22.779 "data_size": 0 00:13:22.779 }, 00:13:22.779 { 00:13:22.779 "name": "BaseBdev4", 00:13:22.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.779 "is_configured": false, 00:13:22.779 "data_offset": 0, 00:13:22.779 "data_size": 0 00:13:22.779 } 00:13:22.779 ] 00:13:22.779 }' 00:13:22.779 09:50:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.779 09:50:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.039 BaseBdev2 00:13:23.039 [2024-11-27 09:50:24.108675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.039 [ 00:13:23.039 { 00:13:23.039 "name": "BaseBdev2", 00:13:23.039 "aliases": [ 00:13:23.039 "88a5714d-590c-42aa-9168-d83447f08875" 00:13:23.039 ], 00:13:23.039 "product_name": "Malloc disk", 00:13:23.039 "block_size": 512, 00:13:23.039 "num_blocks": 65536, 00:13:23.039 "uuid": "88a5714d-590c-42aa-9168-d83447f08875", 00:13:23.039 "assigned_rate_limits": { 
00:13:23.039 "rw_ios_per_sec": 0, 00:13:23.039 "rw_mbytes_per_sec": 0, 00:13:23.039 "r_mbytes_per_sec": 0, 00:13:23.039 "w_mbytes_per_sec": 0 00:13:23.039 }, 00:13:23.039 "claimed": true, 00:13:23.039 "claim_type": "exclusive_write", 00:13:23.039 "zoned": false, 00:13:23.039 "supported_io_types": { 00:13:23.039 "read": true, 00:13:23.039 "write": true, 00:13:23.039 "unmap": true, 00:13:23.039 "flush": true, 00:13:23.039 "reset": true, 00:13:23.039 "nvme_admin": false, 00:13:23.039 "nvme_io": false, 00:13:23.039 "nvme_io_md": false, 00:13:23.039 "write_zeroes": true, 00:13:23.039 "zcopy": true, 00:13:23.039 "get_zone_info": false, 00:13:23.039 "zone_management": false, 00:13:23.039 "zone_append": false, 00:13:23.039 "compare": false, 00:13:23.039 "compare_and_write": false, 00:13:23.039 "abort": true, 00:13:23.039 "seek_hole": false, 00:13:23.039 "seek_data": false, 00:13:23.039 "copy": true, 00:13:23.039 "nvme_iov_md": false 00:13:23.039 }, 00:13:23.039 "memory_domains": [ 00:13:23.039 { 00:13:23.039 "dma_device_id": "system", 00:13:23.039 "dma_device_type": 1 00:13:23.039 }, 00:13:23.039 { 00:13:23.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.039 "dma_device_type": 2 00:13:23.039 } 00:13:23.039 ], 00:13:23.039 "driver_specific": {} 00:13:23.039 } 00:13:23.039 ] 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
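The `verify_raid_bdev_state` helper invoked above pulls the `Existed_Raid` entry out of `bdev_raid_get_bdevs all` with jq and compares its fields against the expected state. A minimal Python re-statement of that check (not the actual shell helper; the sample JSON is abridged from the log output above, keeping only the fields the check reads) might look like:

```python
import json

# Abridged copy of the "Existed_Raid" entry from the bdev_raid_get_bdevs
# output in the log (two of four base bdevs claimed at this point).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 0,
  "state": "configuring",
  "raid_level": "raid1",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true,  "data_offset": 2048, "data_size": 63488},
    {"name": "BaseBdev2", "is_configured": true,  "data_offset": 2048, "data_size": 63488},
    {"name": "BaseBdev3", "is_configured": false, "data_offset": 0,    "data_size": 0},
    {"name": "BaseBdev4", "is_configured": false, "data_offset": 0,    "data_size": 0}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, num_operational):
    """Sketch of the field comparisons the shell helper performs on the
    jq-extracted raid bdev entry; returns the discovered count."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    # The discovered count must agree with the per-slot is_configured flags.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    return discovered

print(verify_raid_bdev_state(raid_bdev_info, "configuring", "raid1", 0, 4))  # → 2
```

The shell version does the same jq extraction (`select(.name == "Existed_Raid")`) and compares fields one by one; the Python asserts above mirror those comparisons under the assumption that the helper checks exactly the fields shown in the log's JSON.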
00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.039 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.303 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.303 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.303 "name": "Existed_Raid", 00:13:23.303 "uuid": "f6d9347c-175a-430f-b44d-f5542572abdb", 00:13:23.303 "strip_size_kb": 0, 00:13:23.303 "state": "configuring", 00:13:23.303 "raid_level": "raid1", 00:13:23.303 "superblock": true, 00:13:23.303 "num_base_bdevs": 4, 00:13:23.303 "num_base_bdevs_discovered": 2, 00:13:23.303 "num_base_bdevs_operational": 4, 00:13:23.303 
"base_bdevs_list": [ 00:13:23.303 { 00:13:23.303 "name": "BaseBdev1", 00:13:23.303 "uuid": "c2297ff8-87df-4d7a-b9d4-431d970a0142", 00:13:23.303 "is_configured": true, 00:13:23.303 "data_offset": 2048, 00:13:23.303 "data_size": 63488 00:13:23.303 }, 00:13:23.303 { 00:13:23.303 "name": "BaseBdev2", 00:13:23.303 "uuid": "88a5714d-590c-42aa-9168-d83447f08875", 00:13:23.303 "is_configured": true, 00:13:23.303 "data_offset": 2048, 00:13:23.303 "data_size": 63488 00:13:23.303 }, 00:13:23.303 { 00:13:23.303 "name": "BaseBdev3", 00:13:23.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.303 "is_configured": false, 00:13:23.303 "data_offset": 0, 00:13:23.303 "data_size": 0 00:13:23.303 }, 00:13:23.303 { 00:13:23.303 "name": "BaseBdev4", 00:13:23.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.303 "is_configured": false, 00:13:23.303 "data_offset": 0, 00:13:23.303 "data_size": 0 00:13:23.303 } 00:13:23.303 ] 00:13:23.303 }' 00:13:23.303 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.303 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.576 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:23.576 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.576 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.576 [2024-11-27 09:50:24.573352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.576 BaseBdev3 00:13:23.576 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.576 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:23.576 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:13:23.576 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:23.576 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:23.576 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:23.576 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:23.576 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:23.576 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.576 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.576 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.576 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:23.576 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.576 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.576 [ 00:13:23.576 { 00:13:23.576 "name": "BaseBdev3", 00:13:23.576 "aliases": [ 00:13:23.576 "7c2d1094-2546-4a5b-ac0a-b4a150d3bdfa" 00:13:23.576 ], 00:13:23.576 "product_name": "Malloc disk", 00:13:23.576 "block_size": 512, 00:13:23.576 "num_blocks": 65536, 00:13:23.577 "uuid": "7c2d1094-2546-4a5b-ac0a-b4a150d3bdfa", 00:13:23.577 "assigned_rate_limits": { 00:13:23.577 "rw_ios_per_sec": 0, 00:13:23.577 "rw_mbytes_per_sec": 0, 00:13:23.577 "r_mbytes_per_sec": 0, 00:13:23.577 "w_mbytes_per_sec": 0 00:13:23.577 }, 00:13:23.577 "claimed": true, 00:13:23.577 "claim_type": "exclusive_write", 00:13:23.577 "zoned": false, 00:13:23.577 "supported_io_types": { 00:13:23.577 "read": true, 00:13:23.577 
"write": true, 00:13:23.577 "unmap": true, 00:13:23.577 "flush": true, 00:13:23.577 "reset": true, 00:13:23.577 "nvme_admin": false, 00:13:23.577 "nvme_io": false, 00:13:23.577 "nvme_io_md": false, 00:13:23.577 "write_zeroes": true, 00:13:23.577 "zcopy": true, 00:13:23.577 "get_zone_info": false, 00:13:23.577 "zone_management": false, 00:13:23.577 "zone_append": false, 00:13:23.577 "compare": false, 00:13:23.577 "compare_and_write": false, 00:13:23.577 "abort": true, 00:13:23.577 "seek_hole": false, 00:13:23.577 "seek_data": false, 00:13:23.577 "copy": true, 00:13:23.577 "nvme_iov_md": false 00:13:23.577 }, 00:13:23.577 "memory_domains": [ 00:13:23.577 { 00:13:23.577 "dma_device_id": "system", 00:13:23.577 "dma_device_type": 1 00:13:23.577 }, 00:13:23.577 { 00:13:23.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.577 "dma_device_type": 2 00:13:23.577 } 00:13:23.577 ], 00:13:23.577 "driver_specific": {} 00:13:23.577 } 00:13:23.577 ] 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.577 "name": "Existed_Raid", 00:13:23.577 "uuid": "f6d9347c-175a-430f-b44d-f5542572abdb", 00:13:23.577 "strip_size_kb": 0, 00:13:23.577 "state": "configuring", 00:13:23.577 "raid_level": "raid1", 00:13:23.577 "superblock": true, 00:13:23.577 "num_base_bdevs": 4, 00:13:23.577 "num_base_bdevs_discovered": 3, 00:13:23.577 "num_base_bdevs_operational": 4, 00:13:23.577 "base_bdevs_list": [ 00:13:23.577 { 00:13:23.577 "name": "BaseBdev1", 00:13:23.577 "uuid": "c2297ff8-87df-4d7a-b9d4-431d970a0142", 00:13:23.577 "is_configured": true, 00:13:23.577 "data_offset": 2048, 00:13:23.577 "data_size": 63488 00:13:23.577 }, 00:13:23.577 { 00:13:23.577 "name": "BaseBdev2", 00:13:23.577 "uuid": 
"88a5714d-590c-42aa-9168-d83447f08875", 00:13:23.577 "is_configured": true, 00:13:23.577 "data_offset": 2048, 00:13:23.577 "data_size": 63488 00:13:23.577 }, 00:13:23.577 { 00:13:23.577 "name": "BaseBdev3", 00:13:23.577 "uuid": "7c2d1094-2546-4a5b-ac0a-b4a150d3bdfa", 00:13:23.577 "is_configured": true, 00:13:23.577 "data_offset": 2048, 00:13:23.577 "data_size": 63488 00:13:23.577 }, 00:13:23.577 { 00:13:23.577 "name": "BaseBdev4", 00:13:23.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.577 "is_configured": false, 00:13:23.577 "data_offset": 0, 00:13:23.577 "data_size": 0 00:13:23.577 } 00:13:23.577 ] 00:13:23.577 }' 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.577 09:50:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.162 [2024-11-27 09:50:25.076846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:24.162 [2024-11-27 09:50:25.077218] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:24.162 [2024-11-27 09:50:25.077274] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:24.162 [2024-11-27 09:50:25.077562] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:24.162 [2024-11-27 09:50:25.077768] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:24.162 BaseBdev4 00:13:24.162 [2024-11-27 09:50:25.077825] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:24.162 [2024-11-27 09:50:25.078029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.162 [ 00:13:24.162 { 00:13:24.162 "name": "BaseBdev4", 00:13:24.162 "aliases": [ 00:13:24.162 "81fc6c46-827b-45b8-b514-09f5b33431f7" 00:13:24.162 ], 00:13:24.162 "product_name": "Malloc disk", 00:13:24.162 "block_size": 512, 00:13:24.162 
"num_blocks": 65536, 00:13:24.162 "uuid": "81fc6c46-827b-45b8-b514-09f5b33431f7", 00:13:24.162 "assigned_rate_limits": { 00:13:24.162 "rw_ios_per_sec": 0, 00:13:24.162 "rw_mbytes_per_sec": 0, 00:13:24.162 "r_mbytes_per_sec": 0, 00:13:24.162 "w_mbytes_per_sec": 0 00:13:24.162 }, 00:13:24.162 "claimed": true, 00:13:24.162 "claim_type": "exclusive_write", 00:13:24.162 "zoned": false, 00:13:24.162 "supported_io_types": { 00:13:24.162 "read": true, 00:13:24.162 "write": true, 00:13:24.162 "unmap": true, 00:13:24.162 "flush": true, 00:13:24.162 "reset": true, 00:13:24.162 "nvme_admin": false, 00:13:24.162 "nvme_io": false, 00:13:24.162 "nvme_io_md": false, 00:13:24.162 "write_zeroes": true, 00:13:24.162 "zcopy": true, 00:13:24.162 "get_zone_info": false, 00:13:24.162 "zone_management": false, 00:13:24.162 "zone_append": false, 00:13:24.162 "compare": false, 00:13:24.162 "compare_and_write": false, 00:13:24.162 "abort": true, 00:13:24.162 "seek_hole": false, 00:13:24.162 "seek_data": false, 00:13:24.162 "copy": true, 00:13:24.162 "nvme_iov_md": false 00:13:24.162 }, 00:13:24.162 "memory_domains": [ 00:13:24.162 { 00:13:24.162 "dma_device_id": "system", 00:13:24.162 "dma_device_type": 1 00:13:24.162 }, 00:13:24.162 { 00:13:24.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.162 "dma_device_type": 2 00:13:24.162 } 00:13:24.162 ], 00:13:24.162 "driver_specific": {} 00:13:24.162 } 00:13:24.162 ] 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:24.162 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:24.163 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
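Once BaseBdev4 is claimed the log shows the array's `state` flip from `configuring` to `online` with `num_base_bdevs_discovered` reaching 4. A simplified model of that transition (the real logic lives in `bdev_raid.c` and also handles offline and degraded states, which this sketch omits) can be expressed as:

```python
def raid_state(num_base_bdevs, base_bdevs_list):
    """Simplified model: the raid bdev stays 'configuring' until every
    base bdev slot is discovered, then goes 'online'."""
    discovered = sum(1 for b in base_bdevs_list if b["is_configured"])
    return "online" if discovered == num_base_bdevs else "configuring"

# Claim one base bdev at a time, as the test loop above does.
slots = [{"name": f"BaseBdev{i}", "is_configured": False} for i in range(1, 5)]
states = []
for slot in slots:
    slot["is_configured"] = True
    states.append(raid_state(4, slots))

print(states)  # → ['configuring', 'configuring', 'configuring', 'online']
```

This matches the progression visible in the successive `bdev_raid_get_bdevs` dumps: discovered counts of 1, 2, 3 leave the state `configuring`, and only the fourth claim produces `online`.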
00:13:24.163 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.163 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.163 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.163 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.163 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:24.163 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.163 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.163 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.163 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.163 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.163 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.163 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.163 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.163 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.163 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.163 "name": "Existed_Raid", 00:13:24.163 "uuid": "f6d9347c-175a-430f-b44d-f5542572abdb", 00:13:24.163 "strip_size_kb": 0, 00:13:24.163 "state": "online", 00:13:24.163 "raid_level": "raid1", 00:13:24.163 "superblock": true, 00:13:24.163 "num_base_bdevs": 4, 
00:13:24.163 "num_base_bdevs_discovered": 4, 00:13:24.163 "num_base_bdevs_operational": 4, 00:13:24.163 "base_bdevs_list": [ 00:13:24.163 { 00:13:24.163 "name": "BaseBdev1", 00:13:24.163 "uuid": "c2297ff8-87df-4d7a-b9d4-431d970a0142", 00:13:24.163 "is_configured": true, 00:13:24.163 "data_offset": 2048, 00:13:24.163 "data_size": 63488 00:13:24.163 }, 00:13:24.163 { 00:13:24.163 "name": "BaseBdev2", 00:13:24.163 "uuid": "88a5714d-590c-42aa-9168-d83447f08875", 00:13:24.163 "is_configured": true, 00:13:24.163 "data_offset": 2048, 00:13:24.163 "data_size": 63488 00:13:24.163 }, 00:13:24.163 { 00:13:24.163 "name": "BaseBdev3", 00:13:24.163 "uuid": "7c2d1094-2546-4a5b-ac0a-b4a150d3bdfa", 00:13:24.163 "is_configured": true, 00:13:24.163 "data_offset": 2048, 00:13:24.163 "data_size": 63488 00:13:24.163 }, 00:13:24.163 { 00:13:24.163 "name": "BaseBdev4", 00:13:24.163 "uuid": "81fc6c46-827b-45b8-b514-09f5b33431f7", 00:13:24.163 "is_configured": true, 00:13:24.163 "data_offset": 2048, 00:13:24.163 "data_size": 63488 00:13:24.163 } 00:13:24.163 ] 00:13:24.163 }' 00:13:24.163 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.163 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.734 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:24.734 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:24.734 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:24.734 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:24.734 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:24.734 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:24.734 
09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:24.734 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:24.734 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.734 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.734 [2024-11-27 09:50:25.584407] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:24.734 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.734 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:24.734 "name": "Existed_Raid", 00:13:24.734 "aliases": [ 00:13:24.734 "f6d9347c-175a-430f-b44d-f5542572abdb" 00:13:24.734 ], 00:13:24.734 "product_name": "Raid Volume", 00:13:24.734 "block_size": 512, 00:13:24.734 "num_blocks": 63488, 00:13:24.734 "uuid": "f6d9347c-175a-430f-b44d-f5542572abdb", 00:13:24.734 "assigned_rate_limits": { 00:13:24.734 "rw_ios_per_sec": 0, 00:13:24.734 "rw_mbytes_per_sec": 0, 00:13:24.734 "r_mbytes_per_sec": 0, 00:13:24.734 "w_mbytes_per_sec": 0 00:13:24.734 }, 00:13:24.734 "claimed": false, 00:13:24.734 "zoned": false, 00:13:24.734 "supported_io_types": { 00:13:24.734 "read": true, 00:13:24.734 "write": true, 00:13:24.734 "unmap": false, 00:13:24.734 "flush": false, 00:13:24.734 "reset": true, 00:13:24.734 "nvme_admin": false, 00:13:24.734 "nvme_io": false, 00:13:24.734 "nvme_io_md": false, 00:13:24.734 "write_zeroes": true, 00:13:24.734 "zcopy": false, 00:13:24.734 "get_zone_info": false, 00:13:24.734 "zone_management": false, 00:13:24.734 "zone_append": false, 00:13:24.734 "compare": false, 00:13:24.734 "compare_and_write": false, 00:13:24.734 "abort": false, 00:13:24.734 "seek_hole": false, 00:13:24.734 "seek_data": false, 00:13:24.734 "copy": false, 00:13:24.734 
"nvme_iov_md": false 00:13:24.734 }, 00:13:24.734 "memory_domains": [ 00:13:24.734 { 00:13:24.734 "dma_device_id": "system", 00:13:24.734 "dma_device_type": 1 00:13:24.734 }, 00:13:24.734 { 00:13:24.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.734 "dma_device_type": 2 00:13:24.734 }, 00:13:24.734 { 00:13:24.734 "dma_device_id": "system", 00:13:24.734 "dma_device_type": 1 00:13:24.734 }, 00:13:24.734 { 00:13:24.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.734 "dma_device_type": 2 00:13:24.734 }, 00:13:24.734 { 00:13:24.734 "dma_device_id": "system", 00:13:24.734 "dma_device_type": 1 00:13:24.734 }, 00:13:24.734 { 00:13:24.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.734 "dma_device_type": 2 00:13:24.734 }, 00:13:24.734 { 00:13:24.734 "dma_device_id": "system", 00:13:24.734 "dma_device_type": 1 00:13:24.734 }, 00:13:24.734 { 00:13:24.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.734 "dma_device_type": 2 00:13:24.734 } 00:13:24.734 ], 00:13:24.734 "driver_specific": { 00:13:24.734 "raid": { 00:13:24.734 "uuid": "f6d9347c-175a-430f-b44d-f5542572abdb", 00:13:24.734 "strip_size_kb": 0, 00:13:24.734 "state": "online", 00:13:24.734 "raid_level": "raid1", 00:13:24.734 "superblock": true, 00:13:24.734 "num_base_bdevs": 4, 00:13:24.734 "num_base_bdevs_discovered": 4, 00:13:24.734 "num_base_bdevs_operational": 4, 00:13:24.734 "base_bdevs_list": [ 00:13:24.734 { 00:13:24.734 "name": "BaseBdev1", 00:13:24.734 "uuid": "c2297ff8-87df-4d7a-b9d4-431d970a0142", 00:13:24.734 "is_configured": true, 00:13:24.734 "data_offset": 2048, 00:13:24.734 "data_size": 63488 00:13:24.734 }, 00:13:24.734 { 00:13:24.734 "name": "BaseBdev2", 00:13:24.734 "uuid": "88a5714d-590c-42aa-9168-d83447f08875", 00:13:24.734 "is_configured": true, 00:13:24.734 "data_offset": 2048, 00:13:24.734 "data_size": 63488 00:13:24.734 }, 00:13:24.734 { 00:13:24.734 "name": "BaseBdev3", 00:13:24.734 "uuid": "7c2d1094-2546-4a5b-ac0a-b4a150d3bdfa", 00:13:24.734 "is_configured": true, 
00:13:24.734 "data_offset": 2048, 00:13:24.734 "data_size": 63488 00:13:24.734 }, 00:13:24.734 { 00:13:24.734 "name": "BaseBdev4", 00:13:24.734 "uuid": "81fc6c46-827b-45b8-b514-09f5b33431f7", 00:13:24.734 "is_configured": true, 00:13:24.734 "data_offset": 2048, 00:13:24.734 "data_size": 63488 00:13:24.734 } 00:13:24.734 ] 00:13:24.734 } 00:13:24.734 } 00:13:24.734 }' 00:13:24.734 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:24.735 BaseBdev2 00:13:24.735 BaseBdev3 00:13:24.735 BaseBdev4' 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.735 09:50:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.735 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.995 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.995 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:24.995 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:24.995 09:50:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:24.995 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.995 09:50:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.995 [2024-11-27 09:50:25.907534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:24.995 09:50:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.995 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.995 "name": "Existed_Raid", 00:13:24.995 "uuid": "f6d9347c-175a-430f-b44d-f5542572abdb", 00:13:24.995 "strip_size_kb": 0, 00:13:24.995 
"state": "online", 00:13:24.995 "raid_level": "raid1", 00:13:24.995 "superblock": true, 00:13:24.995 "num_base_bdevs": 4, 00:13:24.995 "num_base_bdevs_discovered": 3, 00:13:24.995 "num_base_bdevs_operational": 3, 00:13:24.995 "base_bdevs_list": [ 00:13:24.995 { 00:13:24.995 "name": null, 00:13:24.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.995 "is_configured": false, 00:13:24.995 "data_offset": 0, 00:13:24.995 "data_size": 63488 00:13:24.995 }, 00:13:24.995 { 00:13:24.995 "name": "BaseBdev2", 00:13:24.995 "uuid": "88a5714d-590c-42aa-9168-d83447f08875", 00:13:24.995 "is_configured": true, 00:13:24.995 "data_offset": 2048, 00:13:24.995 "data_size": 63488 00:13:24.996 }, 00:13:24.996 { 00:13:24.996 "name": "BaseBdev3", 00:13:24.996 "uuid": "7c2d1094-2546-4a5b-ac0a-b4a150d3bdfa", 00:13:24.996 "is_configured": true, 00:13:24.996 "data_offset": 2048, 00:13:24.996 "data_size": 63488 00:13:24.996 }, 00:13:24.996 { 00:13:24.996 "name": "BaseBdev4", 00:13:24.996 "uuid": "81fc6c46-827b-45b8-b514-09f5b33431f7", 00:13:24.996 "is_configured": true, 00:13:24.996 "data_offset": 2048, 00:13:24.996 "data_size": 63488 00:13:24.996 } 00:13:24.996 ] 00:13:24.996 }' 00:13:24.996 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.996 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.565 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:25.565 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.565 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.565 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:25.565 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.565 09:50:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.565 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.565 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:25.565 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.565 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:25.565 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.565 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.565 [2024-11-27 09:50:26.468882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:25.565 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.565 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:25.565 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.565 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.565 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:25.565 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.565 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.565 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.565 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:25.566 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:13:25.566 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:25.566 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.566 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.566 [2024-11-27 09:50:26.628881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.826 [2024-11-27 09:50:26.780692] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:25.826 [2024-11-27 09:50:26.780804] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:25.826 [2024-11-27 09:50:26.874728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.826 [2024-11-27 09:50:26.874868] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:25.826 [2024-11-27 09:50:26.874885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.826 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.087 BaseBdev2 00:13:26.087 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.087 09:50:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:26.087 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:26.087 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:26.087 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:26.087 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:26.087 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:26.087 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:26.087 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.087 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.087 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.087 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:26.087 09:50:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.087 09:50:26 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:26.087 [ 00:13:26.087 { 00:13:26.087 "name": "BaseBdev2", 00:13:26.087 "aliases": [ 00:13:26.087 "c9699ce1-13b5-48cd-9958-33380a8ba4e9" 00:13:26.087 ], 00:13:26.087 "product_name": "Malloc disk", 00:13:26.087 "block_size": 512, 00:13:26.087 "num_blocks": 65536, 00:13:26.087 "uuid": "c9699ce1-13b5-48cd-9958-33380a8ba4e9", 00:13:26.087 "assigned_rate_limits": { 00:13:26.087 "rw_ios_per_sec": 0, 00:13:26.087 "rw_mbytes_per_sec": 0, 00:13:26.087 "r_mbytes_per_sec": 0, 00:13:26.087 "w_mbytes_per_sec": 0 00:13:26.087 }, 00:13:26.087 "claimed": false, 00:13:26.087 "zoned": false, 00:13:26.087 "supported_io_types": { 00:13:26.087 "read": true, 00:13:26.087 "write": true, 00:13:26.087 "unmap": true, 00:13:26.087 "flush": true, 00:13:26.087 "reset": true, 00:13:26.087 "nvme_admin": false, 00:13:26.087 "nvme_io": false, 00:13:26.087 "nvme_io_md": false, 00:13:26.087 "write_zeroes": true, 00:13:26.087 "zcopy": true, 00:13:26.087 "get_zone_info": false, 00:13:26.087 "zone_management": false, 00:13:26.087 "zone_append": false, 00:13:26.087 "compare": false, 00:13:26.087 "compare_and_write": false, 00:13:26.087 "abort": true, 00:13:26.087 "seek_hole": false, 00:13:26.087 "seek_data": false, 00:13:26.087 "copy": true, 00:13:26.087 "nvme_iov_md": false 00:13:26.087 }, 00:13:26.087 "memory_domains": [ 00:13:26.087 { 00:13:26.087 "dma_device_id": "system", 00:13:26.087 "dma_device_type": 1 00:13:26.087 }, 00:13:26.087 { 00:13:26.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.087 "dma_device_type": 2 00:13:26.087 } 00:13:26.087 ], 00:13:26.087 "driver_specific": {} 00:13:26.087 } 00:13:26.087 ] 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:26.087 09:50:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.087 BaseBdev3 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.087 09:50:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.087 [ 00:13:26.087 { 00:13:26.087 "name": "BaseBdev3", 00:13:26.087 "aliases": [ 00:13:26.087 "b3801e25-61ae-4e0a-a88b-1a06fa8bf0f2" 00:13:26.087 ], 00:13:26.087 "product_name": "Malloc disk", 00:13:26.087 "block_size": 512, 00:13:26.087 "num_blocks": 65536, 00:13:26.087 "uuid": "b3801e25-61ae-4e0a-a88b-1a06fa8bf0f2", 00:13:26.087 "assigned_rate_limits": { 00:13:26.087 "rw_ios_per_sec": 0, 00:13:26.087 "rw_mbytes_per_sec": 0, 00:13:26.087 "r_mbytes_per_sec": 0, 00:13:26.087 "w_mbytes_per_sec": 0 00:13:26.087 }, 00:13:26.087 "claimed": false, 00:13:26.087 "zoned": false, 00:13:26.087 "supported_io_types": { 00:13:26.087 "read": true, 00:13:26.087 "write": true, 00:13:26.087 "unmap": true, 00:13:26.087 "flush": true, 00:13:26.087 "reset": true, 00:13:26.087 "nvme_admin": false, 00:13:26.087 "nvme_io": false, 00:13:26.087 "nvme_io_md": false, 00:13:26.087 "write_zeroes": true, 00:13:26.087 "zcopy": true, 00:13:26.087 "get_zone_info": false, 00:13:26.087 "zone_management": false, 00:13:26.087 "zone_append": false, 00:13:26.087 "compare": false, 00:13:26.087 "compare_and_write": false, 00:13:26.087 "abort": true, 00:13:26.087 "seek_hole": false, 00:13:26.087 "seek_data": false, 00:13:26.087 "copy": true, 00:13:26.087 "nvme_iov_md": false 00:13:26.087 }, 00:13:26.087 "memory_domains": [ 00:13:26.087 { 00:13:26.087 "dma_device_id": "system", 00:13:26.087 "dma_device_type": 1 00:13:26.087 }, 00:13:26.087 { 00:13:26.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.087 "dma_device_type": 2 00:13:26.087 } 00:13:26.087 ], 00:13:26.087 "driver_specific": {} 00:13:26.087 } 00:13:26.087 ] 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.087 BaseBdev4 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.087 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.088 [ 00:13:26.088 { 00:13:26.088 "name": "BaseBdev4", 00:13:26.088 "aliases": [ 00:13:26.088 "d5ce4c6d-dec5-464a-8cae-c3575bff375c" 00:13:26.088 ], 00:13:26.088 "product_name": "Malloc disk", 00:13:26.088 "block_size": 512, 00:13:26.088 "num_blocks": 65536, 00:13:26.088 "uuid": "d5ce4c6d-dec5-464a-8cae-c3575bff375c", 00:13:26.088 "assigned_rate_limits": { 00:13:26.088 "rw_ios_per_sec": 0, 00:13:26.088 "rw_mbytes_per_sec": 0, 00:13:26.088 "r_mbytes_per_sec": 0, 00:13:26.088 "w_mbytes_per_sec": 0 00:13:26.088 }, 00:13:26.088 "claimed": false, 00:13:26.088 "zoned": false, 00:13:26.088 "supported_io_types": { 00:13:26.088 "read": true, 00:13:26.088 "write": true, 00:13:26.088 "unmap": true, 00:13:26.088 "flush": true, 00:13:26.088 "reset": true, 00:13:26.088 "nvme_admin": false, 00:13:26.088 "nvme_io": false, 00:13:26.088 "nvme_io_md": false, 00:13:26.088 "write_zeroes": true, 00:13:26.088 "zcopy": true, 00:13:26.088 "get_zone_info": false, 00:13:26.088 "zone_management": false, 00:13:26.088 "zone_append": false, 00:13:26.088 "compare": false, 00:13:26.088 "compare_and_write": false, 00:13:26.088 "abort": true, 00:13:26.088 "seek_hole": false, 00:13:26.088 "seek_data": false, 00:13:26.088 "copy": true, 00:13:26.088 "nvme_iov_md": false 00:13:26.088 }, 00:13:26.088 "memory_domains": [ 00:13:26.088 { 00:13:26.088 "dma_device_id": "system", 00:13:26.088 "dma_device_type": 1 00:13:26.088 }, 00:13:26.088 { 00:13:26.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.088 "dma_device_type": 2 00:13:26.088 } 00:13:26.088 ], 00:13:26.088 "driver_specific": {} 00:13:26.088 } 00:13:26.088 ] 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.088 [2024-11-27 09:50:27.181915] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:26.088 [2024-11-27 09:50:27.181972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:26.088 [2024-11-27 09:50:27.182009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:26.088 [2024-11-27 09:50:27.184139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:26.088 [2024-11-27 09:50:27.184187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.088 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.348 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.348 "name": "Existed_Raid", 00:13:26.348 "uuid": "d8ae66f5-2ccc-463f-88d1-b68e15f7817b", 00:13:26.348 "strip_size_kb": 0, 00:13:26.348 "state": "configuring", 00:13:26.348 "raid_level": "raid1", 00:13:26.348 "superblock": true, 00:13:26.348 "num_base_bdevs": 4, 00:13:26.348 "num_base_bdevs_discovered": 3, 00:13:26.348 "num_base_bdevs_operational": 4, 00:13:26.348 "base_bdevs_list": [ 00:13:26.348 { 00:13:26.348 "name": "BaseBdev1", 00:13:26.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.348 "is_configured": false, 00:13:26.348 "data_offset": 0, 00:13:26.348 "data_size": 0 00:13:26.348 }, 00:13:26.348 { 00:13:26.348 "name": "BaseBdev2", 00:13:26.348 "uuid": "c9699ce1-13b5-48cd-9958-33380a8ba4e9", 
00:13:26.348 "is_configured": true, 00:13:26.348 "data_offset": 2048, 00:13:26.348 "data_size": 63488 00:13:26.348 }, 00:13:26.348 { 00:13:26.348 "name": "BaseBdev3", 00:13:26.348 "uuid": "b3801e25-61ae-4e0a-a88b-1a06fa8bf0f2", 00:13:26.348 "is_configured": true, 00:13:26.348 "data_offset": 2048, 00:13:26.348 "data_size": 63488 00:13:26.348 }, 00:13:26.348 { 00:13:26.348 "name": "BaseBdev4", 00:13:26.348 "uuid": "d5ce4c6d-dec5-464a-8cae-c3575bff375c", 00:13:26.348 "is_configured": true, 00:13:26.348 "data_offset": 2048, 00:13:26.348 "data_size": 63488 00:13:26.348 } 00:13:26.348 ] 00:13:26.348 }' 00:13:26.348 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.348 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.607 [2024-11-27 09:50:27.605233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.607 "name": "Existed_Raid", 00:13:26.607 "uuid": "d8ae66f5-2ccc-463f-88d1-b68e15f7817b", 00:13:26.607 "strip_size_kb": 0, 00:13:26.607 "state": "configuring", 00:13:26.607 "raid_level": "raid1", 00:13:26.607 "superblock": true, 00:13:26.607 "num_base_bdevs": 4, 00:13:26.607 "num_base_bdevs_discovered": 2, 00:13:26.607 "num_base_bdevs_operational": 4, 00:13:26.607 "base_bdevs_list": [ 00:13:26.607 { 00:13:26.607 "name": "BaseBdev1", 00:13:26.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.607 "is_configured": false, 00:13:26.607 "data_offset": 0, 00:13:26.607 "data_size": 0 00:13:26.607 }, 00:13:26.607 { 00:13:26.607 "name": null, 00:13:26.607 "uuid": "c9699ce1-13b5-48cd-9958-33380a8ba4e9", 00:13:26.607 
"is_configured": false, 00:13:26.607 "data_offset": 0, 00:13:26.607 "data_size": 63488 00:13:26.607 }, 00:13:26.607 { 00:13:26.607 "name": "BaseBdev3", 00:13:26.607 "uuid": "b3801e25-61ae-4e0a-a88b-1a06fa8bf0f2", 00:13:26.607 "is_configured": true, 00:13:26.607 "data_offset": 2048, 00:13:26.607 "data_size": 63488 00:13:26.607 }, 00:13:26.607 { 00:13:26.607 "name": "BaseBdev4", 00:13:26.607 "uuid": "d5ce4c6d-dec5-464a-8cae-c3575bff375c", 00:13:26.607 "is_configured": true, 00:13:26.607 "data_offset": 2048, 00:13:26.607 "data_size": 63488 00:13:26.607 } 00:13:26.607 ] 00:13:26.607 }' 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.607 09:50:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.178 [2024-11-27 09:50:28.117773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.178 BaseBdev1 
00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.178 [ 00:13:27.178 { 00:13:27.178 "name": "BaseBdev1", 00:13:27.178 "aliases": [ 00:13:27.178 "c7dad148-00dd-47f5-b34f-358f2648dbd7" 00:13:27.178 ], 00:13:27.178 "product_name": "Malloc disk", 00:13:27.178 "block_size": 512, 00:13:27.178 "num_blocks": 65536, 00:13:27.178 "uuid": "c7dad148-00dd-47f5-b34f-358f2648dbd7", 00:13:27.178 "assigned_rate_limits": { 00:13:27.178 
"rw_ios_per_sec": 0, 00:13:27.178 "rw_mbytes_per_sec": 0, 00:13:27.178 "r_mbytes_per_sec": 0, 00:13:27.178 "w_mbytes_per_sec": 0 00:13:27.178 }, 00:13:27.178 "claimed": true, 00:13:27.178 "claim_type": "exclusive_write", 00:13:27.178 "zoned": false, 00:13:27.178 "supported_io_types": { 00:13:27.178 "read": true, 00:13:27.178 "write": true, 00:13:27.178 "unmap": true, 00:13:27.178 "flush": true, 00:13:27.178 "reset": true, 00:13:27.178 "nvme_admin": false, 00:13:27.178 "nvme_io": false, 00:13:27.178 "nvme_io_md": false, 00:13:27.178 "write_zeroes": true, 00:13:27.178 "zcopy": true, 00:13:27.178 "get_zone_info": false, 00:13:27.178 "zone_management": false, 00:13:27.178 "zone_append": false, 00:13:27.178 "compare": false, 00:13:27.178 "compare_and_write": false, 00:13:27.178 "abort": true, 00:13:27.178 "seek_hole": false, 00:13:27.178 "seek_data": false, 00:13:27.178 "copy": true, 00:13:27.178 "nvme_iov_md": false 00:13:27.178 }, 00:13:27.178 "memory_domains": [ 00:13:27.178 { 00:13:27.178 "dma_device_id": "system", 00:13:27.178 "dma_device_type": 1 00:13:27.178 }, 00:13:27.178 { 00:13:27.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.178 "dma_device_type": 2 00:13:27.178 } 00:13:27.178 ], 00:13:27.178 "driver_specific": {} 00:13:27.178 } 00:13:27.178 ] 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.178 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.179 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.179 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.179 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.179 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.179 "name": "Existed_Raid", 00:13:27.179 "uuid": "d8ae66f5-2ccc-463f-88d1-b68e15f7817b", 00:13:27.179 "strip_size_kb": 0, 00:13:27.179 "state": "configuring", 00:13:27.179 "raid_level": "raid1", 00:13:27.179 "superblock": true, 00:13:27.179 "num_base_bdevs": 4, 00:13:27.179 "num_base_bdevs_discovered": 3, 00:13:27.179 "num_base_bdevs_operational": 4, 00:13:27.179 "base_bdevs_list": [ 00:13:27.179 { 00:13:27.179 "name": "BaseBdev1", 00:13:27.179 "uuid": "c7dad148-00dd-47f5-b34f-358f2648dbd7", 00:13:27.179 "is_configured": true, 00:13:27.179 "data_offset": 2048, 00:13:27.179 "data_size": 63488 
00:13:27.179 }, 00:13:27.179 { 00:13:27.179 "name": null, 00:13:27.179 "uuid": "c9699ce1-13b5-48cd-9958-33380a8ba4e9", 00:13:27.179 "is_configured": false, 00:13:27.179 "data_offset": 0, 00:13:27.179 "data_size": 63488 00:13:27.179 }, 00:13:27.179 { 00:13:27.179 "name": "BaseBdev3", 00:13:27.179 "uuid": "b3801e25-61ae-4e0a-a88b-1a06fa8bf0f2", 00:13:27.179 "is_configured": true, 00:13:27.179 "data_offset": 2048, 00:13:27.179 "data_size": 63488 00:13:27.179 }, 00:13:27.179 { 00:13:27.179 "name": "BaseBdev4", 00:13:27.179 "uuid": "d5ce4c6d-dec5-464a-8cae-c3575bff375c", 00:13:27.179 "is_configured": true, 00:13:27.179 "data_offset": 2048, 00:13:27.179 "data_size": 63488 00:13:27.179 } 00:13:27.179 ] 00:13:27.179 }' 00:13:27.179 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.179 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.439 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:27.439 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.439 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.439 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.699 
[2024-11-27 09:50:28.593054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.699 09:50:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.699 "name": "Existed_Raid", 00:13:27.699 "uuid": "d8ae66f5-2ccc-463f-88d1-b68e15f7817b", 00:13:27.699 "strip_size_kb": 0, 00:13:27.699 "state": "configuring", 00:13:27.699 "raid_level": "raid1", 00:13:27.699 "superblock": true, 00:13:27.699 "num_base_bdevs": 4, 00:13:27.699 "num_base_bdevs_discovered": 2, 00:13:27.699 "num_base_bdevs_operational": 4, 00:13:27.699 "base_bdevs_list": [ 00:13:27.699 { 00:13:27.699 "name": "BaseBdev1", 00:13:27.699 "uuid": "c7dad148-00dd-47f5-b34f-358f2648dbd7", 00:13:27.699 "is_configured": true, 00:13:27.699 "data_offset": 2048, 00:13:27.699 "data_size": 63488 00:13:27.699 }, 00:13:27.699 { 00:13:27.699 "name": null, 00:13:27.699 "uuid": "c9699ce1-13b5-48cd-9958-33380a8ba4e9", 00:13:27.699 "is_configured": false, 00:13:27.699 "data_offset": 0, 00:13:27.699 "data_size": 63488 00:13:27.699 }, 00:13:27.699 { 00:13:27.699 "name": null, 00:13:27.699 "uuid": "b3801e25-61ae-4e0a-a88b-1a06fa8bf0f2", 00:13:27.699 "is_configured": false, 00:13:27.699 "data_offset": 0, 00:13:27.699 "data_size": 63488 00:13:27.699 }, 00:13:27.699 { 00:13:27.699 "name": "BaseBdev4", 00:13:27.699 "uuid": "d5ce4c6d-dec5-464a-8cae-c3575bff375c", 00:13:27.699 "is_configured": true, 00:13:27.699 "data_offset": 2048, 00:13:27.699 "data_size": 63488 00:13:27.699 } 00:13:27.699 ] 00:13:27.699 }' 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.699 09:50:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.960 
09:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.960 [2024-11-27 09:50:29.056211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.960 09:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.220 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.220 "name": "Existed_Raid", 00:13:28.220 "uuid": "d8ae66f5-2ccc-463f-88d1-b68e15f7817b", 00:13:28.220 "strip_size_kb": 0, 00:13:28.220 "state": "configuring", 00:13:28.220 "raid_level": "raid1", 00:13:28.220 "superblock": true, 00:13:28.220 "num_base_bdevs": 4, 00:13:28.220 "num_base_bdevs_discovered": 3, 00:13:28.220 "num_base_bdevs_operational": 4, 00:13:28.220 "base_bdevs_list": [ 00:13:28.220 { 00:13:28.220 "name": "BaseBdev1", 00:13:28.220 "uuid": "c7dad148-00dd-47f5-b34f-358f2648dbd7", 00:13:28.220 "is_configured": true, 00:13:28.220 "data_offset": 2048, 00:13:28.220 "data_size": 63488 00:13:28.220 }, 00:13:28.220 { 00:13:28.220 "name": null, 00:13:28.220 "uuid": "c9699ce1-13b5-48cd-9958-33380a8ba4e9", 00:13:28.220 "is_configured": false, 00:13:28.220 "data_offset": 0, 00:13:28.220 "data_size": 63488 00:13:28.220 }, 00:13:28.220 { 00:13:28.220 "name": "BaseBdev3", 00:13:28.220 "uuid": "b3801e25-61ae-4e0a-a88b-1a06fa8bf0f2", 00:13:28.220 "is_configured": true, 00:13:28.220 "data_offset": 2048, 00:13:28.220 "data_size": 63488 00:13:28.220 }, 00:13:28.220 { 00:13:28.220 "name": "BaseBdev4", 00:13:28.220 "uuid": 
"d5ce4c6d-dec5-464a-8cae-c3575bff375c", 00:13:28.220 "is_configured": true, 00:13:28.220 "data_offset": 2048, 00:13:28.220 "data_size": 63488 00:13:28.220 } 00:13:28.220 ] 00:13:28.220 }' 00:13:28.220 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.220 09:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.480 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.480 09:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.480 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:28.480 09:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.480 09:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.480 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:28.480 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:28.480 09:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.480 09:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.480 [2024-11-27 09:50:29.515587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:28.741 09:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.741 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:28.741 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.741 09:50:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.741 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.741 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.741 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.741 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.741 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.741 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.741 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.741 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.741 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.741 09:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.741 09:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.741 09:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.741 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.741 "name": "Existed_Raid", 00:13:28.741 "uuid": "d8ae66f5-2ccc-463f-88d1-b68e15f7817b", 00:13:28.741 "strip_size_kb": 0, 00:13:28.741 "state": "configuring", 00:13:28.741 "raid_level": "raid1", 00:13:28.741 "superblock": true, 00:13:28.741 "num_base_bdevs": 4, 00:13:28.741 "num_base_bdevs_discovered": 2, 00:13:28.741 "num_base_bdevs_operational": 4, 00:13:28.741 "base_bdevs_list": [ 00:13:28.741 { 00:13:28.741 "name": null, 00:13:28.741 
"uuid": "c7dad148-00dd-47f5-b34f-358f2648dbd7", 00:13:28.741 "is_configured": false, 00:13:28.741 "data_offset": 0, 00:13:28.741 "data_size": 63488 00:13:28.741 }, 00:13:28.741 { 00:13:28.741 "name": null, 00:13:28.741 "uuid": "c9699ce1-13b5-48cd-9958-33380a8ba4e9", 00:13:28.741 "is_configured": false, 00:13:28.741 "data_offset": 0, 00:13:28.741 "data_size": 63488 00:13:28.741 }, 00:13:28.741 { 00:13:28.741 "name": "BaseBdev3", 00:13:28.741 "uuid": "b3801e25-61ae-4e0a-a88b-1a06fa8bf0f2", 00:13:28.741 "is_configured": true, 00:13:28.741 "data_offset": 2048, 00:13:28.741 "data_size": 63488 00:13:28.741 }, 00:13:28.741 { 00:13:28.741 "name": "BaseBdev4", 00:13:28.741 "uuid": "d5ce4c6d-dec5-464a-8cae-c3575bff375c", 00:13:28.741 "is_configured": true, 00:13:28.741 "data_offset": 2048, 00:13:28.741 "data_size": 63488 00:13:28.741 } 00:13:28.741 ] 00:13:28.741 }' 00:13:28.741 09:50:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.741 09:50:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.002 [2024-11-27 09:50:30.081755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.002 09:50:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.002 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.262 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.262 "name": "Existed_Raid", 00:13:29.262 "uuid": "d8ae66f5-2ccc-463f-88d1-b68e15f7817b", 00:13:29.262 "strip_size_kb": 0, 00:13:29.262 "state": "configuring", 00:13:29.262 "raid_level": "raid1", 00:13:29.262 "superblock": true, 00:13:29.262 "num_base_bdevs": 4, 00:13:29.262 "num_base_bdevs_discovered": 3, 00:13:29.262 "num_base_bdevs_operational": 4, 00:13:29.262 "base_bdevs_list": [ 00:13:29.262 { 00:13:29.262 "name": null, 00:13:29.262 "uuid": "c7dad148-00dd-47f5-b34f-358f2648dbd7", 00:13:29.262 "is_configured": false, 00:13:29.262 "data_offset": 0, 00:13:29.262 "data_size": 63488 00:13:29.262 }, 00:13:29.262 { 00:13:29.262 "name": "BaseBdev2", 00:13:29.262 "uuid": "c9699ce1-13b5-48cd-9958-33380a8ba4e9", 00:13:29.262 "is_configured": true, 00:13:29.262 "data_offset": 2048, 00:13:29.262 "data_size": 63488 00:13:29.262 }, 00:13:29.262 { 00:13:29.262 "name": "BaseBdev3", 00:13:29.262 "uuid": "b3801e25-61ae-4e0a-a88b-1a06fa8bf0f2", 00:13:29.262 "is_configured": true, 00:13:29.262 "data_offset": 2048, 00:13:29.262 "data_size": 63488 00:13:29.262 }, 00:13:29.262 { 00:13:29.262 "name": "BaseBdev4", 00:13:29.262 "uuid": "d5ce4c6d-dec5-464a-8cae-c3575bff375c", 00:13:29.262 "is_configured": true, 00:13:29.262 "data_offset": 2048, 00:13:29.262 "data_size": 63488 00:13:29.262 } 00:13:29.262 ] 00:13:29.262 }' 00:13:29.262 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.262 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.522 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.522 09:50:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.522 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:29.522 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.522 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.522 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:29.522 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.522 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:29.522 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.522 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.522 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.522 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c7dad148-00dd-47f5-b34f-358f2648dbd7 00:13:29.522 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.522 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.782 [2024-11-27 09:50:30.657344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:29.782 [2024-11-27 09:50:30.657575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:29.783 [2024-11-27 09:50:30.657592] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:29.783 [2024-11-27 09:50:30.657855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:13:29.783 [2024-11-27 09:50:30.658045] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:29.783 [2024-11-27 09:50:30.658057] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:29.783 [2024-11-27 09:50:30.658196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.783 NewBaseBdev 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.783 09:50:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.783 [ 00:13:29.783 { 00:13:29.783 "name": "NewBaseBdev", 00:13:29.783 "aliases": [ 00:13:29.783 "c7dad148-00dd-47f5-b34f-358f2648dbd7" 00:13:29.783 ], 00:13:29.783 "product_name": "Malloc disk", 00:13:29.783 "block_size": 512, 00:13:29.783 "num_blocks": 65536, 00:13:29.783 "uuid": "c7dad148-00dd-47f5-b34f-358f2648dbd7", 00:13:29.783 "assigned_rate_limits": { 00:13:29.783 "rw_ios_per_sec": 0, 00:13:29.783 "rw_mbytes_per_sec": 0, 00:13:29.783 "r_mbytes_per_sec": 0, 00:13:29.783 "w_mbytes_per_sec": 0 00:13:29.783 }, 00:13:29.783 "claimed": true, 00:13:29.783 "claim_type": "exclusive_write", 00:13:29.783 "zoned": false, 00:13:29.783 "supported_io_types": { 00:13:29.783 "read": true, 00:13:29.783 "write": true, 00:13:29.783 "unmap": true, 00:13:29.783 "flush": true, 00:13:29.783 "reset": true, 00:13:29.783 "nvme_admin": false, 00:13:29.783 "nvme_io": false, 00:13:29.783 "nvme_io_md": false, 00:13:29.783 "write_zeroes": true, 00:13:29.783 "zcopy": true, 00:13:29.783 "get_zone_info": false, 00:13:29.783 "zone_management": false, 00:13:29.783 "zone_append": false, 00:13:29.783 "compare": false, 00:13:29.783 "compare_and_write": false, 00:13:29.783 "abort": true, 00:13:29.783 "seek_hole": false, 00:13:29.783 "seek_data": false, 00:13:29.783 "copy": true, 00:13:29.783 "nvme_iov_md": false 00:13:29.783 }, 00:13:29.783 "memory_domains": [ 00:13:29.783 { 00:13:29.783 "dma_device_id": "system", 00:13:29.783 "dma_device_type": 1 00:13:29.783 }, 00:13:29.783 { 00:13:29.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.783 "dma_device_type": 2 00:13:29.783 } 00:13:29.783 ], 00:13:29.783 "driver_specific": {} 00:13:29.783 } 00:13:29.783 ] 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:29.783 09:50:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.783 "name": "Existed_Raid", 00:13:29.783 "uuid": "d8ae66f5-2ccc-463f-88d1-b68e15f7817b", 00:13:29.783 "strip_size_kb": 0, 00:13:29.783 
"state": "online", 00:13:29.783 "raid_level": "raid1", 00:13:29.783 "superblock": true, 00:13:29.783 "num_base_bdevs": 4, 00:13:29.783 "num_base_bdevs_discovered": 4, 00:13:29.783 "num_base_bdevs_operational": 4, 00:13:29.783 "base_bdevs_list": [ 00:13:29.783 { 00:13:29.783 "name": "NewBaseBdev", 00:13:29.783 "uuid": "c7dad148-00dd-47f5-b34f-358f2648dbd7", 00:13:29.783 "is_configured": true, 00:13:29.783 "data_offset": 2048, 00:13:29.783 "data_size": 63488 00:13:29.783 }, 00:13:29.783 { 00:13:29.783 "name": "BaseBdev2", 00:13:29.783 "uuid": "c9699ce1-13b5-48cd-9958-33380a8ba4e9", 00:13:29.783 "is_configured": true, 00:13:29.783 "data_offset": 2048, 00:13:29.783 "data_size": 63488 00:13:29.783 }, 00:13:29.783 { 00:13:29.783 "name": "BaseBdev3", 00:13:29.783 "uuid": "b3801e25-61ae-4e0a-a88b-1a06fa8bf0f2", 00:13:29.783 "is_configured": true, 00:13:29.783 "data_offset": 2048, 00:13:29.783 "data_size": 63488 00:13:29.783 }, 00:13:29.783 { 00:13:29.783 "name": "BaseBdev4", 00:13:29.783 "uuid": "d5ce4c6d-dec5-464a-8cae-c3575bff375c", 00:13:29.783 "is_configured": true, 00:13:29.783 "data_offset": 2048, 00:13:29.783 "data_size": 63488 00:13:29.783 } 00:13:29.783 ] 00:13:29.783 }' 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.783 09:50:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.044 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:30.044 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:30.044 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:30.044 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:30.044 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:30.044 
09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:30.044 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:30.044 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:30.044 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.044 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.044 [2024-11-27 09:50:31.144918] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.044 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:30.305 "name": "Existed_Raid", 00:13:30.305 "aliases": [ 00:13:30.305 "d8ae66f5-2ccc-463f-88d1-b68e15f7817b" 00:13:30.305 ], 00:13:30.305 "product_name": "Raid Volume", 00:13:30.305 "block_size": 512, 00:13:30.305 "num_blocks": 63488, 00:13:30.305 "uuid": "d8ae66f5-2ccc-463f-88d1-b68e15f7817b", 00:13:30.305 "assigned_rate_limits": { 00:13:30.305 "rw_ios_per_sec": 0, 00:13:30.305 "rw_mbytes_per_sec": 0, 00:13:30.305 "r_mbytes_per_sec": 0, 00:13:30.305 "w_mbytes_per_sec": 0 00:13:30.305 }, 00:13:30.305 "claimed": false, 00:13:30.305 "zoned": false, 00:13:30.305 "supported_io_types": { 00:13:30.305 "read": true, 00:13:30.305 "write": true, 00:13:30.305 "unmap": false, 00:13:30.305 "flush": false, 00:13:30.305 "reset": true, 00:13:30.305 "nvme_admin": false, 00:13:30.305 "nvme_io": false, 00:13:30.305 "nvme_io_md": false, 00:13:30.305 "write_zeroes": true, 00:13:30.305 "zcopy": false, 00:13:30.305 "get_zone_info": false, 00:13:30.305 "zone_management": false, 00:13:30.305 "zone_append": false, 00:13:30.305 "compare": false, 00:13:30.305 "compare_and_write": false, 00:13:30.305 
"abort": false, 00:13:30.305 "seek_hole": false, 00:13:30.305 "seek_data": false, 00:13:30.305 "copy": false, 00:13:30.305 "nvme_iov_md": false 00:13:30.305 }, 00:13:30.305 "memory_domains": [ 00:13:30.305 { 00:13:30.305 "dma_device_id": "system", 00:13:30.305 "dma_device_type": 1 00:13:30.305 }, 00:13:30.305 { 00:13:30.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.305 "dma_device_type": 2 00:13:30.305 }, 00:13:30.305 { 00:13:30.305 "dma_device_id": "system", 00:13:30.305 "dma_device_type": 1 00:13:30.305 }, 00:13:30.305 { 00:13:30.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.305 "dma_device_type": 2 00:13:30.305 }, 00:13:30.305 { 00:13:30.305 "dma_device_id": "system", 00:13:30.305 "dma_device_type": 1 00:13:30.305 }, 00:13:30.305 { 00:13:30.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.305 "dma_device_type": 2 00:13:30.305 }, 00:13:30.305 { 00:13:30.305 "dma_device_id": "system", 00:13:30.305 "dma_device_type": 1 00:13:30.305 }, 00:13:30.305 { 00:13:30.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.305 "dma_device_type": 2 00:13:30.305 } 00:13:30.305 ], 00:13:30.305 "driver_specific": { 00:13:30.305 "raid": { 00:13:30.305 "uuid": "d8ae66f5-2ccc-463f-88d1-b68e15f7817b", 00:13:30.305 "strip_size_kb": 0, 00:13:30.305 "state": "online", 00:13:30.305 "raid_level": "raid1", 00:13:30.305 "superblock": true, 00:13:30.305 "num_base_bdevs": 4, 00:13:30.305 "num_base_bdevs_discovered": 4, 00:13:30.305 "num_base_bdevs_operational": 4, 00:13:30.305 "base_bdevs_list": [ 00:13:30.305 { 00:13:30.305 "name": "NewBaseBdev", 00:13:30.305 "uuid": "c7dad148-00dd-47f5-b34f-358f2648dbd7", 00:13:30.305 "is_configured": true, 00:13:30.305 "data_offset": 2048, 00:13:30.305 "data_size": 63488 00:13:30.305 }, 00:13:30.305 { 00:13:30.305 "name": "BaseBdev2", 00:13:30.305 "uuid": "c9699ce1-13b5-48cd-9958-33380a8ba4e9", 00:13:30.305 "is_configured": true, 00:13:30.305 "data_offset": 2048, 00:13:30.305 "data_size": 63488 00:13:30.305 }, 00:13:30.305 { 
00:13:30.305 "name": "BaseBdev3", 00:13:30.305 "uuid": "b3801e25-61ae-4e0a-a88b-1a06fa8bf0f2", 00:13:30.305 "is_configured": true, 00:13:30.305 "data_offset": 2048, 00:13:30.305 "data_size": 63488 00:13:30.305 }, 00:13:30.305 { 00:13:30.305 "name": "BaseBdev4", 00:13:30.305 "uuid": "d5ce4c6d-dec5-464a-8cae-c3575bff375c", 00:13:30.305 "is_configured": true, 00:13:30.305 "data_offset": 2048, 00:13:30.305 "data_size": 63488 00:13:30.305 } 00:13:30.305 ] 00:13:30.305 } 00:13:30.305 } 00:13:30.305 }' 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:30.305 BaseBdev2 00:13:30.305 BaseBdev3 00:13:30.305 BaseBdev4' 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.305 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.306 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.306 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.306 09:50:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.306 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.306 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:30.306 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.306 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.306 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.306 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.566 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.566 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.566 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:30.566 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.566 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.566 [2024-11-27 09:50:31.456067] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:30.566 [2024-11-27 09:50:31.456200] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.566 [2024-11-27 09:50:31.456294] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.566 [2024-11-27 09:50:31.456637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.566 [2024-11-27 09:50:31.456654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:13:30.566 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.566 09:50:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74126 00:13:30.566 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74126 ']' 00:13:30.566 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74126 00:13:30.566 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:30.566 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.566 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74126 00:13:30.566 killing process with pid 74126 00:13:30.566 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:30.566 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:30.566 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74126' 00:13:30.566 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74126 00:13:30.567 [2024-11-27 09:50:31.497622] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:30.567 09:50:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74126 00:13:30.827 [2024-11-27 09:50:31.892574] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:32.210 09:50:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:32.210 00:13:32.210 real 0m11.325s 00:13:32.210 user 0m17.865s 00:13:32.210 sys 0m2.082s 00:13:32.210 09:50:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:13:32.210 ************************************ 00:13:32.210 END TEST raid_state_function_test_sb 00:13:32.210 ************************************ 00:13:32.210 09:50:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.210 09:50:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:13:32.210 09:50:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:32.210 09:50:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:32.210 09:50:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:32.210 ************************************ 00:13:32.210 START TEST raid_superblock_test 00:13:32.210 ************************************ 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:32.210 09:50:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74799 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74799 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74799 ']' 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:32.210 09:50:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.210 [2024-11-27 09:50:33.192132] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:13:32.210 [2024-11-27 09:50:33.192276] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74799 ] 00:13:32.470 [2024-11-27 09:50:33.360202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.470 [2024-11-27 09:50:33.473339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.730 [2024-11-27 09:50:33.676272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.730 [2024-11-27 09:50:33.676360] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:32.991 
09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.991 malloc1 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.991 [2024-11-27 09:50:34.071741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:32.991 [2024-11-27 09:50:34.071804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.991 [2024-11-27 09:50:34.071829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:32.991 [2024-11-27 09:50:34.071839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.991 [2024-11-27 09:50:34.073957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.991 [2024-11-27 09:50:34.073994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:32.991 pt1 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:32.991 malloc2 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.991 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.252 [2024-11-27 09:50:34.126613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:33.252 [2024-11-27 09:50:34.126675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.252 [2024-11-27 09:50:34.126701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:33.252 [2024-11-27 09:50:34.126710] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.252 [2024-11-27 09:50:34.128825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.252 [2024-11-27 09:50:34.128864] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:33.252 
pt2 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.252 malloc3 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.252 [2024-11-27 09:50:34.193804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:33.252 [2024-11-27 09:50:34.193864] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.252 [2024-11-27 09:50:34.193884] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:33.252 [2024-11-27 09:50:34.193893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.252 [2024-11-27 09:50:34.195967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.252 [2024-11-27 09:50:34.196012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:33.252 pt3 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.252 malloc4 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.252 [2024-11-27 09:50:34.253249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:33.252 [2024-11-27 09:50:34.253316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.252 [2024-11-27 09:50:34.253340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:33.252 [2024-11-27 09:50:34.253350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.252 [2024-11-27 09:50:34.255546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.252 [2024-11-27 09:50:34.255579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:33.252 pt4 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.252 [2024-11-27 09:50:34.265239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:33.252 [2024-11-27 09:50:34.267008] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:33.252 [2024-11-27 09:50:34.267072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:33.252 [2024-11-27 09:50:34.267132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:33.252 [2024-11-27 09:50:34.267304] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:33.252 [2024-11-27 09:50:34.267329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:33.252 [2024-11-27 09:50:34.267567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:33.252 [2024-11-27 09:50:34.267744] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:33.252 [2024-11-27 09:50:34.267766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:33.252 [2024-11-27 09:50:34.267898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.252 
09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.252 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.252 "name": "raid_bdev1", 00:13:33.252 "uuid": "88c8c118-2412-43ff-83c7-4a061c5db748", 00:13:33.252 "strip_size_kb": 0, 00:13:33.252 "state": "online", 00:13:33.252 "raid_level": "raid1", 00:13:33.252 "superblock": true, 00:13:33.252 "num_base_bdevs": 4, 00:13:33.252 "num_base_bdevs_discovered": 4, 00:13:33.252 "num_base_bdevs_operational": 4, 00:13:33.252 "base_bdevs_list": [ 00:13:33.252 { 00:13:33.252 "name": "pt1", 00:13:33.252 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:33.252 "is_configured": true, 00:13:33.252 "data_offset": 2048, 00:13:33.253 "data_size": 63488 00:13:33.253 }, 00:13:33.253 { 00:13:33.253 "name": "pt2", 00:13:33.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:33.253 "is_configured": true, 00:13:33.253 "data_offset": 2048, 00:13:33.253 "data_size": 63488 00:13:33.253 }, 00:13:33.253 { 00:13:33.253 "name": "pt3", 00:13:33.253 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:33.253 "is_configured": true, 00:13:33.253 "data_offset": 2048, 00:13:33.253 "data_size": 63488 
00:13:33.253 }, 00:13:33.253 { 00:13:33.253 "name": "pt4", 00:13:33.253 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:33.253 "is_configured": true, 00:13:33.253 "data_offset": 2048, 00:13:33.253 "data_size": 63488 00:13:33.253 } 00:13:33.253 ] 00:13:33.253 }' 00:13:33.253 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.253 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.823 [2024-11-27 09:50:34.716820] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:33.823 "name": "raid_bdev1", 00:13:33.823 "aliases": [ 00:13:33.823 "88c8c118-2412-43ff-83c7-4a061c5db748" 00:13:33.823 ], 
00:13:33.823 "product_name": "Raid Volume", 00:13:33.823 "block_size": 512, 00:13:33.823 "num_blocks": 63488, 00:13:33.823 "uuid": "88c8c118-2412-43ff-83c7-4a061c5db748", 00:13:33.823 "assigned_rate_limits": { 00:13:33.823 "rw_ios_per_sec": 0, 00:13:33.823 "rw_mbytes_per_sec": 0, 00:13:33.823 "r_mbytes_per_sec": 0, 00:13:33.823 "w_mbytes_per_sec": 0 00:13:33.823 }, 00:13:33.823 "claimed": false, 00:13:33.823 "zoned": false, 00:13:33.823 "supported_io_types": { 00:13:33.823 "read": true, 00:13:33.823 "write": true, 00:13:33.823 "unmap": false, 00:13:33.823 "flush": false, 00:13:33.823 "reset": true, 00:13:33.823 "nvme_admin": false, 00:13:33.823 "nvme_io": false, 00:13:33.823 "nvme_io_md": false, 00:13:33.823 "write_zeroes": true, 00:13:33.823 "zcopy": false, 00:13:33.823 "get_zone_info": false, 00:13:33.823 "zone_management": false, 00:13:33.823 "zone_append": false, 00:13:33.823 "compare": false, 00:13:33.823 "compare_and_write": false, 00:13:33.823 "abort": false, 00:13:33.823 "seek_hole": false, 00:13:33.823 "seek_data": false, 00:13:33.823 "copy": false, 00:13:33.823 "nvme_iov_md": false 00:13:33.823 }, 00:13:33.823 "memory_domains": [ 00:13:33.823 { 00:13:33.823 "dma_device_id": "system", 00:13:33.823 "dma_device_type": 1 00:13:33.823 }, 00:13:33.823 { 00:13:33.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.823 "dma_device_type": 2 00:13:33.823 }, 00:13:33.823 { 00:13:33.823 "dma_device_id": "system", 00:13:33.823 "dma_device_type": 1 00:13:33.823 }, 00:13:33.823 { 00:13:33.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.823 "dma_device_type": 2 00:13:33.823 }, 00:13:33.823 { 00:13:33.823 "dma_device_id": "system", 00:13:33.823 "dma_device_type": 1 00:13:33.823 }, 00:13:33.823 { 00:13:33.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.823 "dma_device_type": 2 00:13:33.823 }, 00:13:33.823 { 00:13:33.823 "dma_device_id": "system", 00:13:33.823 "dma_device_type": 1 00:13:33.823 }, 00:13:33.823 { 00:13:33.823 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:33.823 "dma_device_type": 2 00:13:33.823 } 00:13:33.823 ], 00:13:33.823 "driver_specific": { 00:13:33.823 "raid": { 00:13:33.823 "uuid": "88c8c118-2412-43ff-83c7-4a061c5db748", 00:13:33.823 "strip_size_kb": 0, 00:13:33.823 "state": "online", 00:13:33.823 "raid_level": "raid1", 00:13:33.823 "superblock": true, 00:13:33.823 "num_base_bdevs": 4, 00:13:33.823 "num_base_bdevs_discovered": 4, 00:13:33.823 "num_base_bdevs_operational": 4, 00:13:33.823 "base_bdevs_list": [ 00:13:33.823 { 00:13:33.823 "name": "pt1", 00:13:33.823 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:33.823 "is_configured": true, 00:13:33.823 "data_offset": 2048, 00:13:33.823 "data_size": 63488 00:13:33.823 }, 00:13:33.823 { 00:13:33.823 "name": "pt2", 00:13:33.823 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:33.823 "is_configured": true, 00:13:33.823 "data_offset": 2048, 00:13:33.823 "data_size": 63488 00:13:33.823 }, 00:13:33.823 { 00:13:33.823 "name": "pt3", 00:13:33.823 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:33.823 "is_configured": true, 00:13:33.823 "data_offset": 2048, 00:13:33.823 "data_size": 63488 00:13:33.823 }, 00:13:33.823 { 00:13:33.823 "name": "pt4", 00:13:33.823 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:33.823 "is_configured": true, 00:13:33.823 "data_offset": 2048, 00:13:33.823 "data_size": 63488 00:13:33.823 } 00:13:33.823 ] 00:13:33.823 } 00:13:33.823 } 00:13:33.823 }' 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:33.823 pt2 00:13:33.823 pt3 00:13:33.823 pt4' 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:33.823 09:50:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:33.823 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:34.084 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.084 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:34.084 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:34.084 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:34.084 09:50:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:34.084 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:34.084 09:50:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.084 [2024-11-27 09:50:35.000296] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=88c8c118-2412-43ff-83c7-4a061c5db748 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 88c8c118-2412-43ff-83c7-4a061c5db748 ']' 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.084 [2024-11-27 09:50:35.031947] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:34.084 [2024-11-27 09:50:35.031984] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:34.084 [2024-11-27 09:50:35.032075] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:34.084 [2024-11-27 09:50:35.032159] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:34.084 [2024-11-27 09:50:35.032181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:34.084 09:50:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.084 [2024-11-27 09:50:35.179709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:34.084 [2024-11-27 09:50:35.181639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:34.084 [2024-11-27 09:50:35.181704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:34.084 [2024-11-27 09:50:35.181738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:34.084 [2024-11-27 09:50:35.181786] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:34.084 [2024-11-27 09:50:35.181834] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:34.084 [2024-11-27 09:50:35.181853] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:34.084 [2024-11-27 09:50:35.181870] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:34.084 [2024-11-27 09:50:35.181884] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:34.084 [2024-11-27 09:50:35.181895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:13:34.084 request: 00:13:34.084 { 00:13:34.084 "name": "raid_bdev1", 00:13:34.084 "raid_level": "raid1", 00:13:34.084 "base_bdevs": [ 00:13:34.084 "malloc1", 00:13:34.084 "malloc2", 00:13:34.084 "malloc3", 00:13:34.084 "malloc4" 00:13:34.084 ], 00:13:34.084 "superblock": false, 00:13:34.084 "method": "bdev_raid_create", 00:13:34.084 "req_id": 1 00:13:34.084 } 00:13:34.084 Got JSON-RPC error response 00:13:34.084 response: 00:13:34.084 { 00:13:34.084 "code": -17, 00:13:34.084 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:34.084 } 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.084 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:34.345 
09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.345 [2024-11-27 09:50:35.247544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:34.345 [2024-11-27 09:50:35.247606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.345 [2024-11-27 09:50:35.247624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:34.345 [2024-11-27 09:50:35.247635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.345 [2024-11-27 09:50:35.249850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.345 [2024-11-27 09:50:35.249891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:34.345 [2024-11-27 09:50:35.249968] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:34.345 [2024-11-27 09:50:35.250034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:34.345 pt1 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.345 09:50:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.345 "name": "raid_bdev1", 00:13:34.345 "uuid": "88c8c118-2412-43ff-83c7-4a061c5db748", 00:13:34.345 "strip_size_kb": 0, 00:13:34.345 "state": "configuring", 00:13:34.345 "raid_level": "raid1", 00:13:34.345 "superblock": true, 00:13:34.345 "num_base_bdevs": 4, 00:13:34.345 "num_base_bdevs_discovered": 1, 00:13:34.345 "num_base_bdevs_operational": 4, 00:13:34.345 "base_bdevs_list": [ 00:13:34.345 { 00:13:34.345 "name": "pt1", 00:13:34.345 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:34.345 "is_configured": true, 00:13:34.345 "data_offset": 2048, 00:13:34.345 "data_size": 63488 00:13:34.345 }, 00:13:34.345 { 00:13:34.345 "name": null, 00:13:34.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:34.345 "is_configured": false, 00:13:34.345 "data_offset": 2048, 00:13:34.345 "data_size": 63488 00:13:34.345 }, 00:13:34.345 { 00:13:34.345 "name": null, 00:13:34.345 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:34.345 
"is_configured": false, 00:13:34.345 "data_offset": 2048, 00:13:34.345 "data_size": 63488 00:13:34.345 }, 00:13:34.345 { 00:13:34.345 "name": null, 00:13:34.345 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:34.345 "is_configured": false, 00:13:34.345 "data_offset": 2048, 00:13:34.345 "data_size": 63488 00:13:34.345 } 00:13:34.345 ] 00:13:34.345 }' 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.345 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.606 [2024-11-27 09:50:35.642945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:34.606 [2024-11-27 09:50:35.643048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.606 [2024-11-27 09:50:35.643072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:34.606 [2024-11-27 09:50:35.643083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.606 [2024-11-27 09:50:35.643524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.606 [2024-11-27 09:50:35.643544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:34.606 [2024-11-27 09:50:35.643628] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:34.606 [2024-11-27 09:50:35.643653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:13:34.606 pt2 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.606 [2024-11-27 09:50:35.654899] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.606 "name": "raid_bdev1", 00:13:34.606 "uuid": "88c8c118-2412-43ff-83c7-4a061c5db748", 00:13:34.606 "strip_size_kb": 0, 00:13:34.606 "state": "configuring", 00:13:34.606 "raid_level": "raid1", 00:13:34.606 "superblock": true, 00:13:34.606 "num_base_bdevs": 4, 00:13:34.606 "num_base_bdevs_discovered": 1, 00:13:34.606 "num_base_bdevs_operational": 4, 00:13:34.606 "base_bdevs_list": [ 00:13:34.606 { 00:13:34.606 "name": "pt1", 00:13:34.606 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:34.606 "is_configured": true, 00:13:34.606 "data_offset": 2048, 00:13:34.606 "data_size": 63488 00:13:34.606 }, 00:13:34.606 { 00:13:34.606 "name": null, 00:13:34.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:34.606 "is_configured": false, 00:13:34.606 "data_offset": 0, 00:13:34.606 "data_size": 63488 00:13:34.606 }, 00:13:34.606 { 00:13:34.606 "name": null, 00:13:34.606 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:34.606 "is_configured": false, 00:13:34.606 "data_offset": 2048, 00:13:34.606 "data_size": 63488 00:13:34.606 }, 00:13:34.606 { 00:13:34.606 "name": null, 00:13:34.606 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:34.606 "is_configured": false, 00:13:34.606 "data_offset": 2048, 00:13:34.606 "data_size": 63488 00:13:34.606 } 00:13:34.606 ] 00:13:34.606 }' 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.606 09:50:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.177 [2024-11-27 09:50:36.086187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:35.177 [2024-11-27 09:50:36.086268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.177 [2024-11-27 09:50:36.086289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:35.177 [2024-11-27 09:50:36.086298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.177 [2024-11-27 09:50:36.086766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.177 [2024-11-27 09:50:36.086785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:35.177 [2024-11-27 09:50:36.086873] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:35.177 [2024-11-27 09:50:36.086895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:35.177 pt2 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:35.177 09:50:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.177 [2024-11-27 09:50:36.098119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:35.177 [2024-11-27 09:50:36.098171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.177 [2024-11-27 09:50:36.098189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:35.177 [2024-11-27 09:50:36.098197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.177 [2024-11-27 09:50:36.098575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.177 [2024-11-27 09:50:36.098600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:35.177 [2024-11-27 09:50:36.098665] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:35.177 [2024-11-27 09:50:36.098683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:35.177 pt3 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.177 [2024-11-27 09:50:36.110071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:35.177 [2024-11-27 
09:50:36.110114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.177 [2024-11-27 09:50:36.110130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:35.177 [2024-11-27 09:50:36.110139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.177 [2024-11-27 09:50:36.110510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.177 [2024-11-27 09:50:36.110527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:35.177 [2024-11-27 09:50:36.110588] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:35.177 [2024-11-27 09:50:36.110612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:35.177 [2024-11-27 09:50:36.110755] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:35.177 [2024-11-27 09:50:36.110764] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:35.177 [2024-11-27 09:50:36.111027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:35.177 [2024-11-27 09:50:36.111179] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:35.177 [2024-11-27 09:50:36.111202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:35.177 [2024-11-27 09:50:36.111342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.177 pt4 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.177 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.177 "name": "raid_bdev1", 00:13:35.177 "uuid": "88c8c118-2412-43ff-83c7-4a061c5db748", 00:13:35.177 "strip_size_kb": 0, 00:13:35.177 "state": "online", 00:13:35.177 "raid_level": "raid1", 00:13:35.177 "superblock": true, 00:13:35.177 "num_base_bdevs": 4, 00:13:35.177 
"num_base_bdevs_discovered": 4, 00:13:35.177 "num_base_bdevs_operational": 4, 00:13:35.177 "base_bdevs_list": [ 00:13:35.177 { 00:13:35.177 "name": "pt1", 00:13:35.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:35.178 "is_configured": true, 00:13:35.178 "data_offset": 2048, 00:13:35.178 "data_size": 63488 00:13:35.178 }, 00:13:35.178 { 00:13:35.178 "name": "pt2", 00:13:35.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:35.178 "is_configured": true, 00:13:35.178 "data_offset": 2048, 00:13:35.178 "data_size": 63488 00:13:35.178 }, 00:13:35.178 { 00:13:35.178 "name": "pt3", 00:13:35.178 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:35.178 "is_configured": true, 00:13:35.178 "data_offset": 2048, 00:13:35.178 "data_size": 63488 00:13:35.178 }, 00:13:35.178 { 00:13:35.178 "name": "pt4", 00:13:35.178 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:35.178 "is_configured": true, 00:13:35.178 "data_offset": 2048, 00:13:35.178 "data_size": 63488 00:13:35.178 } 00:13:35.178 ] 00:13:35.178 }' 00:13:35.178 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.178 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.446 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:35.446 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:35.446 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:35.446 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:35.446 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:35.446 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:35.446 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:35.446 09:50:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:35.446 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.446 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.446 [2024-11-27 09:50:36.529771] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.446 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.446 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:35.447 "name": "raid_bdev1", 00:13:35.447 "aliases": [ 00:13:35.447 "88c8c118-2412-43ff-83c7-4a061c5db748" 00:13:35.447 ], 00:13:35.447 "product_name": "Raid Volume", 00:13:35.447 "block_size": 512, 00:13:35.447 "num_blocks": 63488, 00:13:35.447 "uuid": "88c8c118-2412-43ff-83c7-4a061c5db748", 00:13:35.447 "assigned_rate_limits": { 00:13:35.447 "rw_ios_per_sec": 0, 00:13:35.447 "rw_mbytes_per_sec": 0, 00:13:35.447 "r_mbytes_per_sec": 0, 00:13:35.447 "w_mbytes_per_sec": 0 00:13:35.447 }, 00:13:35.447 "claimed": false, 00:13:35.447 "zoned": false, 00:13:35.447 "supported_io_types": { 00:13:35.447 "read": true, 00:13:35.447 "write": true, 00:13:35.447 "unmap": false, 00:13:35.447 "flush": false, 00:13:35.447 "reset": true, 00:13:35.447 "nvme_admin": false, 00:13:35.447 "nvme_io": false, 00:13:35.447 "nvme_io_md": false, 00:13:35.447 "write_zeroes": true, 00:13:35.447 "zcopy": false, 00:13:35.447 "get_zone_info": false, 00:13:35.447 "zone_management": false, 00:13:35.447 "zone_append": false, 00:13:35.447 "compare": false, 00:13:35.447 "compare_and_write": false, 00:13:35.447 "abort": false, 00:13:35.447 "seek_hole": false, 00:13:35.447 "seek_data": false, 00:13:35.447 "copy": false, 00:13:35.447 "nvme_iov_md": false 00:13:35.447 }, 00:13:35.447 "memory_domains": [ 00:13:35.447 { 00:13:35.447 "dma_device_id": "system", 00:13:35.447 
"dma_device_type": 1 00:13:35.447 }, 00:13:35.447 { 00:13:35.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.447 "dma_device_type": 2 00:13:35.447 }, 00:13:35.447 { 00:13:35.447 "dma_device_id": "system", 00:13:35.447 "dma_device_type": 1 00:13:35.447 }, 00:13:35.447 { 00:13:35.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.447 "dma_device_type": 2 00:13:35.447 }, 00:13:35.447 { 00:13:35.447 "dma_device_id": "system", 00:13:35.447 "dma_device_type": 1 00:13:35.447 }, 00:13:35.447 { 00:13:35.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.447 "dma_device_type": 2 00:13:35.447 }, 00:13:35.447 { 00:13:35.447 "dma_device_id": "system", 00:13:35.447 "dma_device_type": 1 00:13:35.447 }, 00:13:35.447 { 00:13:35.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.447 "dma_device_type": 2 00:13:35.447 } 00:13:35.447 ], 00:13:35.447 "driver_specific": { 00:13:35.447 "raid": { 00:13:35.447 "uuid": "88c8c118-2412-43ff-83c7-4a061c5db748", 00:13:35.447 "strip_size_kb": 0, 00:13:35.447 "state": "online", 00:13:35.447 "raid_level": "raid1", 00:13:35.447 "superblock": true, 00:13:35.447 "num_base_bdevs": 4, 00:13:35.447 "num_base_bdevs_discovered": 4, 00:13:35.447 "num_base_bdevs_operational": 4, 00:13:35.447 "base_bdevs_list": [ 00:13:35.447 { 00:13:35.447 "name": "pt1", 00:13:35.447 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:35.447 "is_configured": true, 00:13:35.447 "data_offset": 2048, 00:13:35.447 "data_size": 63488 00:13:35.447 }, 00:13:35.447 { 00:13:35.447 "name": "pt2", 00:13:35.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:35.447 "is_configured": true, 00:13:35.447 "data_offset": 2048, 00:13:35.447 "data_size": 63488 00:13:35.447 }, 00:13:35.447 { 00:13:35.447 "name": "pt3", 00:13:35.447 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:35.447 "is_configured": true, 00:13:35.447 "data_offset": 2048, 00:13:35.447 "data_size": 63488 00:13:35.447 }, 00:13:35.447 { 00:13:35.447 "name": "pt4", 00:13:35.447 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:13:35.447 "is_configured": true, 00:13:35.447 "data_offset": 2048, 00:13:35.447 "data_size": 63488 00:13:35.447 } 00:13:35.447 ] 00:13:35.447 } 00:13:35.447 } 00:13:35.447 }' 00:13:35.447 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:35.761 pt2 00:13:35.761 pt3 00:13:35.761 pt4' 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.761 09:50:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.761 [2024-11-27 09:50:36.837135] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 88c8c118-2412-43ff-83c7-4a061c5db748 '!=' 88c8c118-2412-43ff-83c7-4a061c5db748 ']' 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.761 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.761 [2024-11-27 09:50:36.872842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:36.028 09:50:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.028 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:36.028 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.028 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.028 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.028 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.028 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.028 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.028 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.028 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.028 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.028 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.028 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.028 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.028 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.028 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.028 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.028 "name": "raid_bdev1", 00:13:36.028 "uuid": "88c8c118-2412-43ff-83c7-4a061c5db748", 00:13:36.028 "strip_size_kb": 0, 00:13:36.028 "state": "online", 
00:13:36.028 "raid_level": "raid1", 00:13:36.028 "superblock": true, 00:13:36.028 "num_base_bdevs": 4, 00:13:36.028 "num_base_bdevs_discovered": 3, 00:13:36.028 "num_base_bdevs_operational": 3, 00:13:36.028 "base_bdevs_list": [ 00:13:36.028 { 00:13:36.028 "name": null, 00:13:36.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.028 "is_configured": false, 00:13:36.028 "data_offset": 0, 00:13:36.028 "data_size": 63488 00:13:36.028 }, 00:13:36.028 { 00:13:36.028 "name": "pt2", 00:13:36.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.028 "is_configured": true, 00:13:36.028 "data_offset": 2048, 00:13:36.028 "data_size": 63488 00:13:36.028 }, 00:13:36.028 { 00:13:36.028 "name": "pt3", 00:13:36.028 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.028 "is_configured": true, 00:13:36.028 "data_offset": 2048, 00:13:36.028 "data_size": 63488 00:13:36.028 }, 00:13:36.028 { 00:13:36.028 "name": "pt4", 00:13:36.028 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:36.028 "is_configured": true, 00:13:36.028 "data_offset": 2048, 00:13:36.028 "data_size": 63488 00:13:36.028 } 00:13:36.028 ] 00:13:36.028 }' 00:13:36.028 09:50:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.028 09:50:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.289 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:36.289 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.289 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.289 [2024-11-27 09:50:37.268127] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:36.289 [2024-11-27 09:50:37.268164] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:36.289 [2024-11-27 09:50:37.268234] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:13:36.289 [2024-11-27 09:50:37.268312] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:36.289 [2024-11-27 09:50:37.268321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:36.289 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.289 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.289 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:36.289 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.289 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.289 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.289 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:36.289 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:36.289 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:36.289 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:36.289 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:36.290 
09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.290 [2024-11-27 09:50:37.392014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:36.290 [2024-11-27 09:50:37.392058] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.290 [2024-11-27 09:50:37.392074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:36.290 [2024-11-27 09:50:37.392083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.290 [2024-11-27 09:50:37.395245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.290 [2024-11-27 09:50:37.395277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:36.290 [2024-11-27 09:50:37.395347] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:36.290 [2024-11-27 09:50:37.395393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:36.290 pt2 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.290 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.550 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.550 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.550 "name": "raid_bdev1", 00:13:36.550 "uuid": "88c8c118-2412-43ff-83c7-4a061c5db748", 00:13:36.550 "strip_size_kb": 0, 00:13:36.550 "state": "configuring", 00:13:36.550 "raid_level": "raid1", 00:13:36.550 "superblock": true, 00:13:36.550 "num_base_bdevs": 4, 00:13:36.550 "num_base_bdevs_discovered": 1, 00:13:36.550 "num_base_bdevs_operational": 3, 00:13:36.550 "base_bdevs_list": [ 00:13:36.550 { 00:13:36.550 "name": null, 00:13:36.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.550 "is_configured": false, 00:13:36.550 "data_offset": 2048, 00:13:36.550 "data_size": 63488 00:13:36.550 }, 00:13:36.550 { 00:13:36.550 "name": "pt2", 00:13:36.550 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.550 "is_configured": true, 00:13:36.550 "data_offset": 2048, 00:13:36.550 "data_size": 63488 00:13:36.550 }, 00:13:36.550 { 00:13:36.550 "name": null, 00:13:36.550 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.550 "is_configured": false, 00:13:36.550 "data_offset": 2048, 00:13:36.550 "data_size": 63488 00:13:36.550 }, 00:13:36.550 { 00:13:36.550 "name": null, 00:13:36.550 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:36.550 "is_configured": false, 00:13:36.550 "data_offset": 2048, 00:13:36.550 "data_size": 63488 00:13:36.550 } 00:13:36.550 ] 00:13:36.550 }' 
00:13:36.550 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.550 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.810 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:36.810 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:36.810 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:36.810 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.810 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.810 [2024-11-27 09:50:37.783322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:36.810 [2024-11-27 09:50:37.783363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:36.810 [2024-11-27 09:50:37.783379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:36.810 [2024-11-27 09:50:37.783387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:36.810 [2024-11-27 09:50:37.783717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:36.810 [2024-11-27 09:50:37.783743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:36.810 [2024-11-27 09:50:37.783802] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:36.810 [2024-11-27 09:50:37.783822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:36.810 pt3 00:13:36.810 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.810 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:13:36.810 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.810 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:36.810 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.810 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.810 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.810 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.810 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.810 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.810 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.810 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.810 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.811 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.811 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.811 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.811 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.811 "name": "raid_bdev1", 00:13:36.811 "uuid": "88c8c118-2412-43ff-83c7-4a061c5db748", 00:13:36.811 "strip_size_kb": 0, 00:13:36.811 "state": "configuring", 00:13:36.811 "raid_level": "raid1", 00:13:36.811 "superblock": true, 00:13:36.811 "num_base_bdevs": 4, 00:13:36.811 "num_base_bdevs_discovered": 2, 00:13:36.811 "num_base_bdevs_operational": 3, 00:13:36.811 
"base_bdevs_list": [ 00:13:36.811 { 00:13:36.811 "name": null, 00:13:36.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.811 "is_configured": false, 00:13:36.811 "data_offset": 2048, 00:13:36.811 "data_size": 63488 00:13:36.811 }, 00:13:36.811 { 00:13:36.811 "name": "pt2", 00:13:36.811 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:36.811 "is_configured": true, 00:13:36.811 "data_offset": 2048, 00:13:36.811 "data_size": 63488 00:13:36.811 }, 00:13:36.811 { 00:13:36.811 "name": "pt3", 00:13:36.811 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:36.811 "is_configured": true, 00:13:36.811 "data_offset": 2048, 00:13:36.811 "data_size": 63488 00:13:36.811 }, 00:13:36.811 { 00:13:36.811 "name": null, 00:13:36.811 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:36.811 "is_configured": false, 00:13:36.811 "data_offset": 2048, 00:13:36.811 "data_size": 63488 00:13:36.811 } 00:13:36.811 ] 00:13:36.811 }' 00:13:36.811 09:50:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.811 09:50:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.380 [2024-11-27 09:50:38.218661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:37.380 [2024-11-27 09:50:38.218735] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.380 [2024-11-27 09:50:38.218762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:37.380 [2024-11-27 09:50:38.218771] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.380 [2024-11-27 09:50:38.219207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.380 [2024-11-27 09:50:38.219232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:37.380 [2024-11-27 09:50:38.219321] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:37.380 [2024-11-27 09:50:38.219349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:37.380 [2024-11-27 09:50:38.219466] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:37.380 [2024-11-27 09:50:38.219481] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:37.380 [2024-11-27 09:50:38.219711] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:37.380 [2024-11-27 09:50:38.219867] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:37.380 [2024-11-27 09:50:38.219885] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:37.380 [2024-11-27 09:50:38.220029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.380 pt4 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.380 "name": "raid_bdev1", 00:13:37.380 "uuid": "88c8c118-2412-43ff-83c7-4a061c5db748", 00:13:37.380 "strip_size_kb": 0, 00:13:37.380 "state": "online", 00:13:37.380 "raid_level": "raid1", 00:13:37.380 "superblock": true, 00:13:37.380 "num_base_bdevs": 4, 00:13:37.380 "num_base_bdevs_discovered": 3, 00:13:37.380 "num_base_bdevs_operational": 3, 00:13:37.380 "base_bdevs_list": [ 00:13:37.380 { 00:13:37.380 "name": null, 00:13:37.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.380 "is_configured": false, 00:13:37.380 
"data_offset": 2048, 00:13:37.380 "data_size": 63488 00:13:37.380 }, 00:13:37.380 { 00:13:37.380 "name": "pt2", 00:13:37.380 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.380 "is_configured": true, 00:13:37.380 "data_offset": 2048, 00:13:37.380 "data_size": 63488 00:13:37.380 }, 00:13:37.380 { 00:13:37.380 "name": "pt3", 00:13:37.380 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:37.380 "is_configured": true, 00:13:37.380 "data_offset": 2048, 00:13:37.380 "data_size": 63488 00:13:37.380 }, 00:13:37.380 { 00:13:37.380 "name": "pt4", 00:13:37.380 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:37.380 "is_configured": true, 00:13:37.380 "data_offset": 2048, 00:13:37.380 "data_size": 63488 00:13:37.380 } 00:13:37.380 ] 00:13:37.380 }' 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.380 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.640 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:37.640 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.640 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.640 [2024-11-27 09:50:38.653863] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:37.640 [2024-11-27 09:50:38.653917] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.640 [2024-11-27 09:50:38.653990] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.640 [2024-11-27 09:50:38.654075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.640 [2024-11-27 09:50:38.654094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:37.640 09:50:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.640 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.640 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:37.640 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.640 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.640 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.640 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:37.640 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:37.640 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:13:37.640 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:13:37.640 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:13:37.640 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.640 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.641 [2024-11-27 09:50:38.725731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:37.641 [2024-11-27 09:50:38.725784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:37.641 [2024-11-27 09:50:38.725800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:37.641 [2024-11-27 09:50:38.725811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.641 [2024-11-27 09:50:38.727839] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.641 [2024-11-27 09:50:38.727873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:37.641 [2024-11-27 09:50:38.727941] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:37.641 [2024-11-27 09:50:38.727981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:37.641 [2024-11-27 09:50:38.728116] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:37.641 [2024-11-27 09:50:38.728137] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:37.641 [2024-11-27 09:50:38.728150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:37.641 [2024-11-27 09:50:38.728210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:37.641 [2024-11-27 09:50:38.728317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:37.641 pt1 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.641 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.901 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.901 "name": "raid_bdev1", 00:13:37.901 "uuid": "88c8c118-2412-43ff-83c7-4a061c5db748", 00:13:37.901 "strip_size_kb": 0, 00:13:37.901 "state": "configuring", 00:13:37.901 "raid_level": "raid1", 00:13:37.901 "superblock": true, 00:13:37.901 "num_base_bdevs": 4, 00:13:37.901 "num_base_bdevs_discovered": 2, 00:13:37.901 "num_base_bdevs_operational": 3, 00:13:37.901 "base_bdevs_list": [ 00:13:37.901 { 00:13:37.901 "name": null, 00:13:37.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.901 "is_configured": false, 00:13:37.901 "data_offset": 2048, 00:13:37.901 
"data_size": 63488 00:13:37.901 }, 00:13:37.901 { 00:13:37.901 "name": "pt2", 00:13:37.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:37.901 "is_configured": true, 00:13:37.901 "data_offset": 2048, 00:13:37.901 "data_size": 63488 00:13:37.901 }, 00:13:37.901 { 00:13:37.901 "name": "pt3", 00:13:37.901 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:37.901 "is_configured": true, 00:13:37.901 "data_offset": 2048, 00:13:37.901 "data_size": 63488 00:13:37.901 }, 00:13:37.901 { 00:13:37.901 "name": null, 00:13:37.901 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:37.901 "is_configured": false, 00:13:37.901 "data_offset": 2048, 00:13:37.901 "data_size": 63488 00:13:37.901 } 00:13:37.901 ] 00:13:37.901 }' 00:13:37.901 09:50:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.901 09:50:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.160 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.161 [2024-11-27 
09:50:39.172978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:38.161 [2024-11-27 09:50:39.173052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.161 [2024-11-27 09:50:39.173074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:38.161 [2024-11-27 09:50:39.173083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.161 [2024-11-27 09:50:39.173473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.161 [2024-11-27 09:50:39.173489] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:38.161 [2024-11-27 09:50:39.173564] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:38.161 [2024-11-27 09:50:39.173584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:38.161 [2024-11-27 09:50:39.173714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:38.161 [2024-11-27 09:50:39.173721] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:38.161 [2024-11-27 09:50:39.173949] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:38.161 [2024-11-27 09:50:39.174106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:38.161 [2024-11-27 09:50:39.174118] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:38.161 [2024-11-27 09:50:39.174248] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.161 pt4 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:38.161 09:50:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.161 "name": "raid_bdev1", 00:13:38.161 "uuid": "88c8c118-2412-43ff-83c7-4a061c5db748", 00:13:38.161 "strip_size_kb": 0, 00:13:38.161 "state": "online", 00:13:38.161 "raid_level": "raid1", 00:13:38.161 "superblock": true, 00:13:38.161 "num_base_bdevs": 4, 00:13:38.161 "num_base_bdevs_discovered": 3, 00:13:38.161 "num_base_bdevs_operational": 3, 00:13:38.161 "base_bdevs_list": [ 00:13:38.161 { 
00:13:38.161 "name": null, 00:13:38.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.161 "is_configured": false, 00:13:38.161 "data_offset": 2048, 00:13:38.161 "data_size": 63488 00:13:38.161 }, 00:13:38.161 { 00:13:38.161 "name": "pt2", 00:13:38.161 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.161 "is_configured": true, 00:13:38.161 "data_offset": 2048, 00:13:38.161 "data_size": 63488 00:13:38.161 }, 00:13:38.161 { 00:13:38.161 "name": "pt3", 00:13:38.161 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:38.161 "is_configured": true, 00:13:38.161 "data_offset": 2048, 00:13:38.161 "data_size": 63488 00:13:38.161 }, 00:13:38.161 { 00:13:38.161 "name": "pt4", 00:13:38.161 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:38.161 "is_configured": true, 00:13:38.161 "data_offset": 2048, 00:13:38.161 "data_size": 63488 00:13:38.161 } 00:13:38.161 ] 00:13:38.161 }' 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.161 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.730 
09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:38.730 [2024-11-27 09:50:39.632506] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 88c8c118-2412-43ff-83c7-4a061c5db748 '!=' 88c8c118-2412-43ff-83c7-4a061c5db748 ']' 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74799 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74799 ']' 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74799 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74799 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:38.730 killing process with pid 74799 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74799' 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74799 00:13:38.730 [2024-11-27 09:50:39.683142] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:38.730 [2024-11-27 09:50:39.683235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.730 [2024-11-27 09:50:39.683313] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.730 [2024-11-27 09:50:39.683328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:38.730 09:50:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74799 00:13:38.990 [2024-11-27 09:50:40.084792] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:40.370 09:50:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:40.370 00:13:40.370 real 0m8.085s 00:13:40.370 user 0m12.613s 00:13:40.370 sys 0m1.497s 00:13:40.370 09:50:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.370 09:50:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.370 ************************************ 00:13:40.370 END TEST raid_superblock_test 00:13:40.370 ************************************ 00:13:40.370 09:50:41 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:13:40.370 09:50:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:40.370 09:50:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.370 09:50:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:40.370 ************************************ 00:13:40.370 START TEST raid_read_error_test 00:13:40.370 ************************************ 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:40.370 09:50:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4wK4gfoGLg 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75279 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75279 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75279 ']' 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:40.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:40.370 09:50:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.370 [2024-11-27 09:50:41.379166] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:13:40.370 [2024-11-27 09:50:41.379300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75279 ] 00:13:40.631 [2024-11-27 09:50:41.561523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.631 [2024-11-27 09:50:41.676807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.890 [2024-11-27 09:50:41.868269] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.890 [2024-11-27 09:50:41.868353] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.150 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.150 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:41.150 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:41.150 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:41.150 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.150 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.150 BaseBdev1_malloc 00:13:41.150 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.150 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:41.150 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.150 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.150 true 00:13:41.150 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:41.150 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:41.150 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.150 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.150 [2024-11-27 09:50:42.260303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:41.150 [2024-11-27 09:50:42.260378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.150 [2024-11-27 09:50:42.260398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:41.150 [2024-11-27 09:50:42.260408] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.150 [2024-11-27 09:50:42.262398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.150 [2024-11-27 09:50:42.262434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:41.150 BaseBdev1 00:13:41.150 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.150 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:41.150 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:41.150 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.150 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.410 BaseBdev2_malloc 00:13:41.410 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.410 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:41.410 09:50:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.410 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.410 true 00:13:41.410 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.410 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:41.410 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.410 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.410 [2024-11-27 09:50:42.328548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:41.410 [2024-11-27 09:50:42.328613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.410 [2024-11-27 09:50:42.328633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:41.410 [2024-11-27 09:50:42.328645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.410 [2024-11-27 09:50:42.330824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.410 [2024-11-27 09:50:42.330860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:41.410 BaseBdev2 00:13:41.410 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.410 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:41.410 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:41.410 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.410 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.410 BaseBdev3_malloc 00:13:41.410 09:50:42 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.410 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:41.410 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.410 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.410 true 00:13:41.410 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.411 [2024-11-27 09:50:42.409484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:41.411 [2024-11-27 09:50:42.409567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.411 [2024-11-27 09:50:42.409586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:41.411 [2024-11-27 09:50:42.409597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.411 [2024-11-27 09:50:42.411698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.411 [2024-11-27 09:50:42.411732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:41.411 BaseBdev3 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.411 BaseBdev4_malloc 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.411 true 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.411 [2024-11-27 09:50:42.478191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:41.411 [2024-11-27 09:50:42.478245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.411 [2024-11-27 09:50:42.478263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:41.411 [2024-11-27 09:50:42.478273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.411 [2024-11-27 09:50:42.480271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.411 [2024-11-27 09:50:42.480321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:41.411 BaseBdev4 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.411 [2024-11-27 09:50:42.490227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.411 [2024-11-27 09:50:42.491961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:41.411 [2024-11-27 09:50:42.492046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:41.411 [2024-11-27 09:50:42.492104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:41.411 [2024-11-27 09:50:42.492348] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:41.411 [2024-11-27 09:50:42.492392] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:41.411 [2024-11-27 09:50:42.492631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:41.411 [2024-11-27 09:50:42.492801] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:41.411 [2024-11-27 09:50:42.492823] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:41.411 [2024-11-27 09:50:42.492978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:41.411 09:50:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.411 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.670 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.670 "name": "raid_bdev1", 00:13:41.670 "uuid": "76a9e9e6-4095-475d-95f7-4840def5095c", 00:13:41.670 "strip_size_kb": 0, 00:13:41.670 "state": "online", 00:13:41.670 "raid_level": "raid1", 00:13:41.670 "superblock": true, 00:13:41.670 "num_base_bdevs": 4, 00:13:41.670 "num_base_bdevs_discovered": 4, 00:13:41.670 "num_base_bdevs_operational": 4, 00:13:41.670 "base_bdevs_list": [ 00:13:41.670 { 
00:13:41.670 "name": "BaseBdev1", 00:13:41.670 "uuid": "52c376fe-c87c-5664-b19a-5404ca94bb49", 00:13:41.670 "is_configured": true, 00:13:41.670 "data_offset": 2048, 00:13:41.670 "data_size": 63488 00:13:41.670 }, 00:13:41.670 { 00:13:41.670 "name": "BaseBdev2", 00:13:41.670 "uuid": "1f5c361f-e810-5cc2-8500-e21405a9ddf9", 00:13:41.670 "is_configured": true, 00:13:41.670 "data_offset": 2048, 00:13:41.670 "data_size": 63488 00:13:41.670 }, 00:13:41.670 { 00:13:41.670 "name": "BaseBdev3", 00:13:41.670 "uuid": "57b2c629-f639-582e-b1bc-732a65099685", 00:13:41.670 "is_configured": true, 00:13:41.670 "data_offset": 2048, 00:13:41.670 "data_size": 63488 00:13:41.670 }, 00:13:41.670 { 00:13:41.670 "name": "BaseBdev4", 00:13:41.670 "uuid": "737ebd48-5a48-5813-b21d-6e031b973c16", 00:13:41.670 "is_configured": true, 00:13:41.670 "data_offset": 2048, 00:13:41.670 "data_size": 63488 00:13:41.670 } 00:13:41.670 ] 00:13:41.670 }' 00:13:41.670 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.670 09:50:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.930 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:41.930 09:50:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:41.930 [2024-11-27 09:50:42.990751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.869 09:50:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.869 09:50:43 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.869 "name": "raid_bdev1", 00:13:42.869 "uuid": "76a9e9e6-4095-475d-95f7-4840def5095c", 00:13:42.869 "strip_size_kb": 0, 00:13:42.869 "state": "online", 00:13:42.869 "raid_level": "raid1", 00:13:42.869 "superblock": true, 00:13:42.869 "num_base_bdevs": 4, 00:13:42.869 "num_base_bdevs_discovered": 4, 00:13:42.869 "num_base_bdevs_operational": 4, 00:13:42.869 "base_bdevs_list": [ 00:13:42.869 { 00:13:42.869 "name": "BaseBdev1", 00:13:42.869 "uuid": "52c376fe-c87c-5664-b19a-5404ca94bb49", 00:13:42.869 "is_configured": true, 00:13:42.869 "data_offset": 2048, 00:13:42.869 "data_size": 63488 00:13:42.869 }, 00:13:42.869 { 00:13:42.869 "name": "BaseBdev2", 00:13:42.869 "uuid": "1f5c361f-e810-5cc2-8500-e21405a9ddf9", 00:13:42.869 "is_configured": true, 00:13:42.869 "data_offset": 2048, 00:13:42.869 "data_size": 63488 00:13:42.869 }, 00:13:42.869 { 00:13:42.869 "name": "BaseBdev3", 00:13:42.869 "uuid": "57b2c629-f639-582e-b1bc-732a65099685", 00:13:42.869 "is_configured": true, 00:13:42.869 "data_offset": 2048, 00:13:42.869 "data_size": 63488 00:13:42.869 }, 00:13:42.869 { 00:13:42.869 "name": "BaseBdev4", 00:13:42.869 "uuid": "737ebd48-5a48-5813-b21d-6e031b973c16", 00:13:42.869 "is_configured": true, 00:13:42.869 "data_offset": 2048, 00:13:42.869 "data_size": 63488 00:13:42.869 } 00:13:42.869 ] 00:13:42.869 }' 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.869 09:50:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.440 09:50:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:43.440 09:50:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.440 09:50:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:43.440 [2024-11-27 09:50:44.367737] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.440 [2024-11-27 09:50:44.367791] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.440 [2024-11-27 09:50:44.370405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.440 [2024-11-27 09:50:44.370469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.440 [2024-11-27 09:50:44.370582] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.440 [2024-11-27 09:50:44.370600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:43.440 { 00:13:43.440 "results": [ 00:13:43.440 { 00:13:43.440 "job": "raid_bdev1", 00:13:43.440 "core_mask": "0x1", 00:13:43.440 "workload": "randrw", 00:13:43.440 "percentage": 50, 00:13:43.440 "status": "finished", 00:13:43.440 "queue_depth": 1, 00:13:43.440 "io_size": 131072, 00:13:43.440 "runtime": 1.377888, 00:13:43.440 "iops": 10693.902552311944, 00:13:43.440 "mibps": 1336.737819038993, 00:13:43.440 "io_failed": 0, 00:13:43.440 "io_timeout": 0, 00:13:43.440 "avg_latency_us": 90.86347042288583, 00:13:43.440 "min_latency_us": 23.811353711790392, 00:13:43.440 "max_latency_us": 1366.5257641921398 00:13:43.440 } 00:13:43.440 ], 00:13:43.440 "core_count": 1 00:13:43.440 } 00:13:43.440 09:50:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.440 09:50:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75279 00:13:43.440 09:50:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75279 ']' 00:13:43.440 09:50:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75279 00:13:43.440 09:50:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:13:43.440 09:50:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.440 09:50:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75279 00:13:43.440 09:50:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:43.440 09:50:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:43.440 killing process with pid 75279 00:13:43.440 09:50:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75279' 00:13:43.440 09:50:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75279 00:13:43.440 [2024-11-27 09:50:44.406936] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:43.440 09:50:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75279 00:13:43.700 [2024-11-27 09:50:44.732216] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:45.083 09:50:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4wK4gfoGLg 00:13:45.083 09:50:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:45.083 09:50:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:45.083 09:50:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:45.083 09:50:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:45.083 09:50:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:45.083 09:50:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:45.083 09:50:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:45.083 00:13:45.083 real 0m4.677s 00:13:45.083 user 0m5.466s 00:13:45.083 sys 0m0.601s 
00:13:45.083 09:50:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.083 09:50:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.083 ************************************ 00:13:45.083 END TEST raid_read_error_test 00:13:45.083 ************************************ 00:13:45.083 09:50:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:13:45.083 09:50:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:45.083 09:50:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.083 09:50:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:45.083 ************************************ 00:13:45.083 START TEST raid_write_error_test 00:13:45.083 ************************************ 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hwgXDncLTe 00:13:45.083 09:50:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75425 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75425 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75425 ']' 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:45.083 09:50:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.083 [2024-11-27 09:50:46.146223] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:13:45.083 [2024-11-27 09:50:46.146402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75425 ] 00:13:45.343 [2024-11-27 09:50:46.325290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.343 [2024-11-27 09:50:46.446187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.603 [2024-11-27 09:50:46.640928] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.603 [2024-11-27 09:50:46.641009] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.863 09:50:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.863 09:50:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:45.863 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:45.863 09:50:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:45.863 09:50:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.863 09:50:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.124 BaseBdev1_malloc 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.124 true 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.124 [2024-11-27 09:50:47.052462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:46.124 [2024-11-27 09:50:47.052623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.124 [2024-11-27 09:50:47.052662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:46.124 [2024-11-27 09:50:47.052694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.124 [2024-11-27 09:50:47.054752] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.124 [2024-11-27 09:50:47.054830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:46.124 BaseBdev1 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.124 BaseBdev2_malloc 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:46.124 09:50:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.124 true 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.124 [2024-11-27 09:50:47.120591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:46.124 [2024-11-27 09:50:47.120745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.124 [2024-11-27 09:50:47.120779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:46.124 [2024-11-27 09:50:47.120811] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.124 [2024-11-27 09:50:47.122880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.124 [2024-11-27 09:50:47.122956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:46.124 BaseBdev2 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:46.124 BaseBdev3_malloc 00:13:46.124 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.125 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:46.125 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.125 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.125 true 00:13:46.125 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.125 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:46.125 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.125 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.125 [2024-11-27 09:50:47.202439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:46.125 [2024-11-27 09:50:47.202588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.125 [2024-11-27 09:50:47.202624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:46.125 [2024-11-27 09:50:47.202654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.125 [2024-11-27 09:50:47.204732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.125 [2024-11-27 09:50:47.204814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:46.125 BaseBdev3 00:13:46.125 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.125 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:46.125 09:50:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:46.125 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.125 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.125 BaseBdev4_malloc 00:13:46.125 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.125 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.385 true 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.385 [2024-11-27 09:50:47.271316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:46.385 [2024-11-27 09:50:47.271469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:46.385 [2024-11-27 09:50:47.271492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:46.385 [2024-11-27 09:50:47.271504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:46.385 [2024-11-27 09:50:47.273651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:46.385 [2024-11-27 09:50:47.273693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:46.385 BaseBdev4 
00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.385 [2024-11-27 09:50:47.283349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:46.385 [2024-11-27 09:50:47.285228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:46.385 [2024-11-27 09:50:47.285349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:46.385 [2024-11-27 09:50:47.285435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:46.385 [2024-11-27 09:50:47.285711] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:46.385 [2024-11-27 09:50:47.285762] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:46.385 [2024-11-27 09:50:47.286027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:46.385 [2024-11-27 09:50:47.286227] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:46.385 [2024-11-27 09:50:47.286268] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:46.385 [2024-11-27 09:50:47.286465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.385 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.386 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.386 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.386 "name": "raid_bdev1", 00:13:46.386 "uuid": "885eb0e7-27f3-4b66-83a6-85794908f833", 00:13:46.386 "strip_size_kb": 0, 00:13:46.386 "state": "online", 00:13:46.386 "raid_level": "raid1", 00:13:46.386 "superblock": true, 00:13:46.386 "num_base_bdevs": 4, 00:13:46.386 "num_base_bdevs_discovered": 4, 00:13:46.386 
"num_base_bdevs_operational": 4, 00:13:46.386 "base_bdevs_list": [ 00:13:46.386 { 00:13:46.386 "name": "BaseBdev1", 00:13:46.386 "uuid": "ea4e903e-b73f-5a5b-933e-a53fa96e5c80", 00:13:46.386 "is_configured": true, 00:13:46.386 "data_offset": 2048, 00:13:46.386 "data_size": 63488 00:13:46.386 }, 00:13:46.386 { 00:13:46.386 "name": "BaseBdev2", 00:13:46.386 "uuid": "af678a0d-a8ba-57ad-a3ab-f953d9c150fe", 00:13:46.386 "is_configured": true, 00:13:46.386 "data_offset": 2048, 00:13:46.386 "data_size": 63488 00:13:46.386 }, 00:13:46.386 { 00:13:46.386 "name": "BaseBdev3", 00:13:46.386 "uuid": "98a0a3e6-8cf9-5802-ac07-14b82ab71a1c", 00:13:46.386 "is_configured": true, 00:13:46.386 "data_offset": 2048, 00:13:46.386 "data_size": 63488 00:13:46.386 }, 00:13:46.386 { 00:13:46.386 "name": "BaseBdev4", 00:13:46.386 "uuid": "e9ae43ec-f0d1-5de8-acec-7aa123e2aa83", 00:13:46.386 "is_configured": true, 00:13:46.386 "data_offset": 2048, 00:13:46.386 "data_size": 63488 00:13:46.386 } 00:13:46.386 ] 00:13:46.386 }' 00:13:46.386 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.386 09:50:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.645 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:46.645 09:50:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:46.904 [2024-11-27 09:50:47.787901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.842 [2024-11-27 09:50:48.703050] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:47.842 [2024-11-27 09:50:48.703235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:47.842 [2024-11-27 09:50:48.703486] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.842 "name": "raid_bdev1", 00:13:47.842 "uuid": "885eb0e7-27f3-4b66-83a6-85794908f833", 00:13:47.842 "strip_size_kb": 0, 00:13:47.842 "state": "online", 00:13:47.842 "raid_level": "raid1", 00:13:47.842 "superblock": true, 00:13:47.842 "num_base_bdevs": 4, 00:13:47.842 "num_base_bdevs_discovered": 3, 00:13:47.842 "num_base_bdevs_operational": 3, 00:13:47.842 "base_bdevs_list": [ 00:13:47.842 { 00:13:47.842 "name": null, 00:13:47.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.842 "is_configured": false, 00:13:47.842 "data_offset": 0, 00:13:47.842 "data_size": 63488 00:13:47.842 }, 00:13:47.842 { 00:13:47.842 "name": "BaseBdev2", 00:13:47.842 "uuid": "af678a0d-a8ba-57ad-a3ab-f953d9c150fe", 00:13:47.842 "is_configured": true, 00:13:47.842 "data_offset": 2048, 00:13:47.842 "data_size": 63488 00:13:47.842 }, 00:13:47.842 { 00:13:47.842 "name": "BaseBdev3", 00:13:47.842 "uuid": "98a0a3e6-8cf9-5802-ac07-14b82ab71a1c", 00:13:47.842 "is_configured": true, 00:13:47.842 "data_offset": 2048, 00:13:47.842 "data_size": 63488 00:13:47.842 }, 00:13:47.842 { 00:13:47.842 "name": "BaseBdev4", 00:13:47.842 "uuid": "e9ae43ec-f0d1-5de8-acec-7aa123e2aa83", 00:13:47.842 "is_configured": true, 00:13:47.842 "data_offset": 2048, 00:13:47.842 "data_size": 63488 00:13:47.842 } 00:13:47.842 ] 
00:13:47.842 }' 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.842 09:50:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.102 09:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:48.102 09:50:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.102 09:50:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.102 [2024-11-27 09:50:49.191763] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:48.102 [2024-11-27 09:50:49.191902] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:48.102 [2024-11-27 09:50:49.194808] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:48.102 [2024-11-27 09:50:49.194898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.102 [2024-11-27 09:50:49.195032] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:48.102 [2024-11-27 09:50:49.195086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:48.102 { 00:13:48.102 "results": [ 00:13:48.102 { 00:13:48.102 "job": "raid_bdev1", 00:13:48.102 "core_mask": "0x1", 00:13:48.102 "workload": "randrw", 00:13:48.102 "percentage": 50, 00:13:48.102 "status": "finished", 00:13:48.102 "queue_depth": 1, 00:13:48.102 "io_size": 131072, 00:13:48.102 "runtime": 1.404986, 00:13:48.102 "iops": 11331.785512453505, 00:13:48.102 "mibps": 1416.4731890566882, 00:13:48.102 "io_failed": 0, 00:13:48.102 "io_timeout": 0, 00:13:48.102 "avg_latency_us": 85.48820807101878, 00:13:48.102 "min_latency_us": 23.699563318777294, 00:13:48.102 "max_latency_us": 1717.1004366812226 00:13:48.102 } 00:13:48.102 ], 00:13:48.102 "core_count": 1 
00:13:48.102 } 00:13:48.102 09:50:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.102 09:50:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75425 00:13:48.102 09:50:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75425 ']' 00:13:48.102 09:50:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75425 00:13:48.102 09:50:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:48.102 09:50:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.102 09:50:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75425 00:13:48.360 09:50:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.360 09:50:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.360 09:50:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75425' 00:13:48.360 killing process with pid 75425 00:13:48.360 09:50:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75425 00:13:48.360 [2024-11-27 09:50:49.236954] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:48.360 09:50:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75425 00:13:48.619 [2024-11-27 09:50:49.573058] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:49.998 09:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hwgXDncLTe 00:13:49.998 09:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:49.998 09:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:49.998 09:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:13:49.998 ************************************ 00:13:49.998 END TEST raid_write_error_test 00:13:49.998 ************************************ 00:13:49.998 09:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:49.998 09:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:49.998 09:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:49.998 09:50:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:49.998 00:13:49.998 real 0m4.778s 00:13:49.998 user 0m5.615s 00:13:49.998 sys 0m0.630s 00:13:49.998 09:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.998 09:50:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.998 09:50:50 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:13:49.998 09:50:50 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:49.998 09:50:50 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:13:49.998 09:50:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:49.998 09:50:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.998 09:50:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:49.998 ************************************ 00:13:49.998 START TEST raid_rebuild_test 00:13:49.998 ************************************ 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:49.998 
09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75575 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75575 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75575 ']' 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:49.998 09:50:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.998 [2024-11-27 09:50:50.968199] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:13:49.998 [2024-11-27 09:50:50.968412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:49.998 Zero copy mechanism will not be used. 
00:13:49.998 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75575 ] 00:13:50.257 [2024-11-27 09:50:51.130524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.257 [2024-11-27 09:50:51.249459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.515 [2024-11-27 09:50:51.456677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.515 [2024-11-27 09:50:51.456750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.773 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:50.773 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:50.773 09:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:50.773 09:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:50.773 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.773 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.773 BaseBdev1_malloc 00:13:50.773 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.773 09:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:50.773 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.773 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.773 [2024-11-27 09:50:51.858954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:50.773 [2024-11-27 09:50:51.859168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.773 [2024-11-27 
09:50:51.859217] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:50.773 [2024-11-27 09:50:51.859258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.773 [2024-11-27 09:50:51.861665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.773 [2024-11-27 09:50:51.861775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:50.773 BaseBdev1 00:13:50.773 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.773 09:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:50.773 09:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:50.773 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.773 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.032 BaseBdev2_malloc 00:13:51.032 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.032 09:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:51.032 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.032 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.032 [2024-11-27 09:50:51.917336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:51.032 [2024-11-27 09:50:51.917437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.032 [2024-11-27 09:50:51.917467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:51.032 [2024-11-27 09:50:51.917478] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:51.032 [2024-11-27 09:50:51.919713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.032 [2024-11-27 09:50:51.919763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:51.032 BaseBdev2 00:13:51.032 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.032 09:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:51.032 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.032 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.032 spare_malloc 00:13:51.032 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.032 09:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:51.032 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.032 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.032 spare_delay 00:13:51.032 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.032 09:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:51.032 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.032 09:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.032 [2024-11-27 09:50:51.999169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:51.032 [2024-11-27 09:50:51.999376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.032 [2024-11-27 09:50:51.999427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:13:51.032 [2024-11-27 09:50:51.999470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.033 [2024-11-27 09:50:52.001832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.033 [2024-11-27 09:50:52.001950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:51.033 spare 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.033 [2024-11-27 09:50:52.011216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.033 [2024-11-27 09:50:52.013261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.033 [2024-11-27 09:50:52.013445] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:51.033 [2024-11-27 09:50:52.013483] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:51.033 [2024-11-27 09:50:52.013835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:51.033 [2024-11-27 09:50:52.014076] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:51.033 [2024-11-27 09:50:52.014125] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:51.033 [2024-11-27 09:50:52.014374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.033 
09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.033 "name": "raid_bdev1", 00:13:51.033 "uuid": "cacacdfd-5354-4888-90bd-adf213a775cb", 00:13:51.033 "strip_size_kb": 0, 00:13:51.033 "state": "online", 00:13:51.033 "raid_level": "raid1", 00:13:51.033 "superblock": false, 00:13:51.033 "num_base_bdevs": 2, 00:13:51.033 "num_base_bdevs_discovered": 
2, 00:13:51.033 "num_base_bdevs_operational": 2, 00:13:51.033 "base_bdevs_list": [ 00:13:51.033 { 00:13:51.033 "name": "BaseBdev1", 00:13:51.033 "uuid": "f1d65eee-e8ac-5d3d-95a5-176ebd385a50", 00:13:51.033 "is_configured": true, 00:13:51.033 "data_offset": 0, 00:13:51.033 "data_size": 65536 00:13:51.033 }, 00:13:51.033 { 00:13:51.033 "name": "BaseBdev2", 00:13:51.033 "uuid": "5ccfca1b-c2fd-5e07-b2e8-dc1324a8d8b6", 00:13:51.033 "is_configured": true, 00:13:51.033 "data_offset": 0, 00:13:51.033 "data_size": 65536 00:13:51.033 } 00:13:51.033 ] 00:13:51.033 }' 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.033 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.600 [2024-11-27 09:50:52.430776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:51.600 09:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:51.600 [2024-11-27 09:50:52.702171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:51.600 /dev/nbd0 00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:51.859 1+0 records in 00:13:51.859 1+0 records out 00:13:51.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487675 s, 8.4 MB/s 00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:13:51.859 09:50:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:56.049 65536+0 records in 00:13:56.049 65536+0 records out 00:13:56.049 33554432 bytes (34 MB, 32 MiB) copied, 4.40215 s, 7.6 MB/s 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:56.308 [2024-11-27 09:50:57.410348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.308 
09:50:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.308 [2024-11-27 09:50:57.426477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.308 09:50:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.309 09:50:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.309 09:50:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.309 09:50:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.309 09:50:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.309 09:50:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.309 09:50:57 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:56.566 09:50:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.566 09:50:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.566 "name": "raid_bdev1", 00:13:56.566 "uuid": "cacacdfd-5354-4888-90bd-adf213a775cb", 00:13:56.566 "strip_size_kb": 0, 00:13:56.566 "state": "online", 00:13:56.566 "raid_level": "raid1", 00:13:56.566 "superblock": false, 00:13:56.566 "num_base_bdevs": 2, 00:13:56.566 "num_base_bdevs_discovered": 1, 00:13:56.566 "num_base_bdevs_operational": 1, 00:13:56.566 "base_bdevs_list": [ 00:13:56.567 { 00:13:56.567 "name": null, 00:13:56.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.567 "is_configured": false, 00:13:56.567 "data_offset": 0, 00:13:56.567 "data_size": 65536 00:13:56.567 }, 00:13:56.567 { 00:13:56.567 "name": "BaseBdev2", 00:13:56.567 "uuid": "5ccfca1b-c2fd-5e07-b2e8-dc1324a8d8b6", 00:13:56.567 "is_configured": true, 00:13:56.567 "data_offset": 0, 00:13:56.567 "data_size": 65536 00:13:56.567 } 00:13:56.567 ] 00:13:56.567 }' 00:13:56.567 09:50:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.567 09:50:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.825 09:50:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:56.825 09:50:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.825 09:50:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.825 [2024-11-27 09:50:57.869746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:56.825 [2024-11-27 09:50:57.889571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:13:56.825 09:50:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.825 09:50:57 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:56.825 [2024-11-27 09:50:57.892107] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:58.203 09:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.203 09:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.203 09:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.203 09:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.203 09:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.203 09:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.203 09:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.203 09:50:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.203 09:50:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.203 09:50:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.203 09:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.203 "name": "raid_bdev1", 00:13:58.203 "uuid": "cacacdfd-5354-4888-90bd-adf213a775cb", 00:13:58.203 "strip_size_kb": 0, 00:13:58.203 "state": "online", 00:13:58.203 "raid_level": "raid1", 00:13:58.203 "superblock": false, 00:13:58.203 "num_base_bdevs": 2, 00:13:58.203 "num_base_bdevs_discovered": 2, 00:13:58.203 "num_base_bdevs_operational": 2, 00:13:58.203 "process": { 00:13:58.203 "type": "rebuild", 00:13:58.203 "target": "spare", 00:13:58.203 "progress": { 00:13:58.203 "blocks": 20480, 00:13:58.203 "percent": 31 00:13:58.203 } 00:13:58.203 }, 00:13:58.203 "base_bdevs_list": [ 00:13:58.203 { 
00:13:58.203 "name": "spare", 00:13:58.203 "uuid": "7b84a1d1-2c29-5d8e-8b60-fdcc497a8e66", 00:13:58.203 "is_configured": true, 00:13:58.203 "data_offset": 0, 00:13:58.203 "data_size": 65536 00:13:58.203 }, 00:13:58.203 { 00:13:58.203 "name": "BaseBdev2", 00:13:58.203 "uuid": "5ccfca1b-c2fd-5e07-b2e8-dc1324a8d8b6", 00:13:58.203 "is_configured": true, 00:13:58.203 "data_offset": 0, 00:13:58.203 "data_size": 65536 00:13:58.203 } 00:13:58.203 ] 00:13:58.203 }' 00:13:58.203 09:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.203 09:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:58.203 09:50:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.203 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:58.203 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:58.203 09:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.203 09:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.203 [2024-11-27 09:50:59.043883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.203 [2024-11-27 09:50:59.102766] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:58.203 [2024-11-27 09:50:59.103030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.203 [2024-11-27 09:50:59.103077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:58.203 [2024-11-27 09:50:59.103125] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:58.203 09:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.203 09:50:59 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:58.203 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.203 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.203 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.203 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.203 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:58.203 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.203 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.204 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.204 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.204 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.204 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.204 09:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.204 09:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.204 09:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.204 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.204 "name": "raid_bdev1", 00:13:58.204 "uuid": "cacacdfd-5354-4888-90bd-adf213a775cb", 00:13:58.204 "strip_size_kb": 0, 00:13:58.204 "state": "online", 00:13:58.204 "raid_level": "raid1", 00:13:58.204 "superblock": false, 00:13:58.204 "num_base_bdevs": 2, 00:13:58.204 "num_base_bdevs_discovered": 1, 
00:13:58.204 "num_base_bdevs_operational": 1, 00:13:58.204 "base_bdevs_list": [ 00:13:58.204 { 00:13:58.204 "name": null, 00:13:58.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.204 "is_configured": false, 00:13:58.204 "data_offset": 0, 00:13:58.204 "data_size": 65536 00:13:58.204 }, 00:13:58.204 { 00:13:58.204 "name": "BaseBdev2", 00:13:58.204 "uuid": "5ccfca1b-c2fd-5e07-b2e8-dc1324a8d8b6", 00:13:58.204 "is_configured": true, 00:13:58.204 "data_offset": 0, 00:13:58.204 "data_size": 65536 00:13:58.204 } 00:13:58.204 ] 00:13:58.204 }' 00:13:58.204 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.204 09:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.462 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.462 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.462 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.463 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.463 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.463 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.463 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.463 09:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.463 09:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.722 09:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.722 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.722 "name": "raid_bdev1", 00:13:58.722 "uuid": 
"cacacdfd-5354-4888-90bd-adf213a775cb", 00:13:58.722 "strip_size_kb": 0, 00:13:58.722 "state": "online", 00:13:58.722 "raid_level": "raid1", 00:13:58.722 "superblock": false, 00:13:58.722 "num_base_bdevs": 2, 00:13:58.722 "num_base_bdevs_discovered": 1, 00:13:58.722 "num_base_bdevs_operational": 1, 00:13:58.722 "base_bdevs_list": [ 00:13:58.722 { 00:13:58.722 "name": null, 00:13:58.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.722 "is_configured": false, 00:13:58.722 "data_offset": 0, 00:13:58.722 "data_size": 65536 00:13:58.722 }, 00:13:58.722 { 00:13:58.722 "name": "BaseBdev2", 00:13:58.722 "uuid": "5ccfca1b-c2fd-5e07-b2e8-dc1324a8d8b6", 00:13:58.722 "is_configured": true, 00:13:58.722 "data_offset": 0, 00:13:58.722 "data_size": 65536 00:13:58.722 } 00:13:58.722 ] 00:13:58.722 }' 00:13:58.722 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.722 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:58.722 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.722 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:58.722 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:58.722 09:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.722 09:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.722 [2024-11-27 09:50:59.731150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:58.722 [2024-11-27 09:50:59.748661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:13:58.722 09:50:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.722 09:50:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:13:58.722 [2024-11-27 09:50:59.750950] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:59.661 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.661 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.661 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.661 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.661 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.661 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.661 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.661 09:51:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.661 09:51:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.661 09:51:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.921 "name": "raid_bdev1", 00:13:59.921 "uuid": "cacacdfd-5354-4888-90bd-adf213a775cb", 00:13:59.921 "strip_size_kb": 0, 00:13:59.921 "state": "online", 00:13:59.921 "raid_level": "raid1", 00:13:59.921 "superblock": false, 00:13:59.921 "num_base_bdevs": 2, 00:13:59.921 "num_base_bdevs_discovered": 2, 00:13:59.921 "num_base_bdevs_operational": 2, 00:13:59.921 "process": { 00:13:59.921 "type": "rebuild", 00:13:59.921 "target": "spare", 00:13:59.921 "progress": { 00:13:59.921 "blocks": 20480, 00:13:59.921 "percent": 31 00:13:59.921 } 00:13:59.921 }, 00:13:59.921 "base_bdevs_list": [ 00:13:59.921 { 00:13:59.921 "name": "spare", 00:13:59.921 "uuid": 
"7b84a1d1-2c29-5d8e-8b60-fdcc497a8e66", 00:13:59.921 "is_configured": true, 00:13:59.921 "data_offset": 0, 00:13:59.921 "data_size": 65536 00:13:59.921 }, 00:13:59.921 { 00:13:59.921 "name": "BaseBdev2", 00:13:59.921 "uuid": "5ccfca1b-c2fd-5e07-b2e8-dc1324a8d8b6", 00:13:59.921 "is_configured": true, 00:13:59.921 "data_offset": 0, 00:13:59.921 "data_size": 65536 00:13:59.921 } 00:13:59.921 ] 00:13:59.921 }' 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=375 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.921 "name": "raid_bdev1", 00:13:59.921 "uuid": "cacacdfd-5354-4888-90bd-adf213a775cb", 00:13:59.921 "strip_size_kb": 0, 00:13:59.921 "state": "online", 00:13:59.921 "raid_level": "raid1", 00:13:59.921 "superblock": false, 00:13:59.921 "num_base_bdevs": 2, 00:13:59.921 "num_base_bdevs_discovered": 2, 00:13:59.921 "num_base_bdevs_operational": 2, 00:13:59.921 "process": { 00:13:59.921 "type": "rebuild", 00:13:59.921 "target": "spare", 00:13:59.921 "progress": { 00:13:59.921 "blocks": 22528, 00:13:59.921 "percent": 34 00:13:59.921 } 00:13:59.921 }, 00:13:59.921 "base_bdevs_list": [ 00:13:59.921 { 00:13:59.921 "name": "spare", 00:13:59.921 "uuid": "7b84a1d1-2c29-5d8e-8b60-fdcc497a8e66", 00:13:59.921 "is_configured": true, 00:13:59.921 "data_offset": 0, 00:13:59.921 "data_size": 65536 00:13:59.921 }, 00:13:59.921 { 00:13:59.921 "name": "BaseBdev2", 00:13:59.921 "uuid": "5ccfca1b-c2fd-5e07-b2e8-dc1324a8d8b6", 00:13:59.921 "is_configured": true, 00:13:59.921 "data_offset": 0, 00:13:59.921 "data_size": 65536 00:13:59.921 } 00:13:59.921 ] 00:13:59.921 }' 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.921 09:51:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:00.859 09:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:00.859 09:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.859 09:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.859 09:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.859 09:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.859 09:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.119 09:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.119 09:51:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.119 09:51:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.119 09:51:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.119 09:51:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.119 09:51:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.119 "name": "raid_bdev1", 00:14:01.119 "uuid": "cacacdfd-5354-4888-90bd-adf213a775cb", 00:14:01.119 "strip_size_kb": 0, 00:14:01.119 "state": "online", 00:14:01.119 "raid_level": "raid1", 00:14:01.119 "superblock": false, 00:14:01.119 "num_base_bdevs": 2, 00:14:01.119 "num_base_bdevs_discovered": 2, 00:14:01.119 "num_base_bdevs_operational": 2, 00:14:01.119 "process": { 00:14:01.119 "type": "rebuild", 00:14:01.119 "target": "spare", 
00:14:01.119 "progress": { 00:14:01.119 "blocks": 45056, 00:14:01.119 "percent": 68 00:14:01.119 } 00:14:01.119 }, 00:14:01.119 "base_bdevs_list": [ 00:14:01.119 { 00:14:01.119 "name": "spare", 00:14:01.119 "uuid": "7b84a1d1-2c29-5d8e-8b60-fdcc497a8e66", 00:14:01.119 "is_configured": true, 00:14:01.119 "data_offset": 0, 00:14:01.119 "data_size": 65536 00:14:01.119 }, 00:14:01.119 { 00:14:01.119 "name": "BaseBdev2", 00:14:01.119 "uuid": "5ccfca1b-c2fd-5e07-b2e8-dc1324a8d8b6", 00:14:01.119 "is_configured": true, 00:14:01.119 "data_offset": 0, 00:14:01.119 "data_size": 65536 00:14:01.119 } 00:14:01.119 ] 00:14:01.119 }' 00:14:01.119 09:51:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.119 09:51:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.119 09:51:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.119 09:51:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.119 09:51:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:02.058 [2024-11-27 09:51:02.978826] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:02.058 [2024-11-27 09:51:02.978927] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:02.058 [2024-11-27 09:51:02.979018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.058 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:02.058 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.058 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.058 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
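The polling visible in this log repeatedly calls `bdev_raid_get_bdevs`, picks out the bdev by name, and treats a missing `.process` object as `"none"`. A minimal sketch of that check, assuming `jq` is available (the function name `check_rebuild_state` is illustrative, not part of SPDK):

```shell
# Sketch of the rebuild-state check driven by bdev_raid.sh in this log.
# Input is the JSON array emitted by `rpc_cmd bdev_raid_get_bdevs all`;
# a bdev with no .process field reports "none none", which is the loop's
# signal that the rebuild has finished.
check_rebuild_state() {
    # $1: JSON array from bdev_raid_get_bdevs; $2: raid bdev name
    local info ptype ptarget
    info=$(printf '%s' "$1" | jq -r ".[] | select(.name == \"$2\")")
    ptype=$(printf '%s' "$info" | jq -r '.process.type // "none"')
    ptarget=$(printf '%s' "$info" | jq -r '.process.target // "none"')
    printf '%s %s\n' "$ptype" "$ptarget"
}

# Example input modeled on the log above, truncated to the relevant fields:
sample='[{"name":"raid_bdev1","process":{"type":"rebuild","target":"spare"}}]'
check_rebuild_state "$sample" raid_bdev1    # prints: rebuild spare
```

The test script wraps this check in a `(( SECONDS < timeout ))` loop with `sleep 1` between iterations, breaking out once the process reports `none none`.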
00:14:02.058 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.058 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.058 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.058 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.058 09:51:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.058 09:51:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.058 09:51:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.058 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.058 "name": "raid_bdev1", 00:14:02.058 "uuid": "cacacdfd-5354-4888-90bd-adf213a775cb", 00:14:02.058 "strip_size_kb": 0, 00:14:02.058 "state": "online", 00:14:02.058 "raid_level": "raid1", 00:14:02.058 "superblock": false, 00:14:02.058 "num_base_bdevs": 2, 00:14:02.058 "num_base_bdevs_discovered": 2, 00:14:02.058 "num_base_bdevs_operational": 2, 00:14:02.058 "base_bdevs_list": [ 00:14:02.058 { 00:14:02.058 "name": "spare", 00:14:02.058 "uuid": "7b84a1d1-2c29-5d8e-8b60-fdcc497a8e66", 00:14:02.058 "is_configured": true, 00:14:02.058 "data_offset": 0, 00:14:02.058 "data_size": 65536 00:14:02.058 }, 00:14:02.058 { 00:14:02.058 "name": "BaseBdev2", 00:14:02.058 "uuid": "5ccfca1b-c2fd-5e07-b2e8-dc1324a8d8b6", 00:14:02.058 "is_configured": true, 00:14:02.058 "data_offset": 0, 00:14:02.058 "data_size": 65536 00:14:02.058 } 00:14:02.058 ] 00:14:02.058 }' 00:14:02.058 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.318 "name": "raid_bdev1", 00:14:02.318 "uuid": "cacacdfd-5354-4888-90bd-adf213a775cb", 00:14:02.318 "strip_size_kb": 0, 00:14:02.318 "state": "online", 00:14:02.318 "raid_level": "raid1", 00:14:02.318 "superblock": false, 00:14:02.318 "num_base_bdevs": 2, 00:14:02.318 "num_base_bdevs_discovered": 2, 00:14:02.318 "num_base_bdevs_operational": 2, 00:14:02.318 "base_bdevs_list": [ 00:14:02.318 { 00:14:02.318 "name": "spare", 00:14:02.318 "uuid": "7b84a1d1-2c29-5d8e-8b60-fdcc497a8e66", 00:14:02.318 "is_configured": true, 00:14:02.318 "data_offset": 0, 00:14:02.318 "data_size": 65536 
00:14:02.318 }, 00:14:02.318 { 00:14:02.318 "name": "BaseBdev2", 00:14:02.318 "uuid": "5ccfca1b-c2fd-5e07-b2e8-dc1324a8d8b6", 00:14:02.318 "is_configured": true, 00:14:02.318 "data_offset": 0, 00:14:02.318 "data_size": 65536 00:14:02.318 } 00:14:02.318 ] 00:14:02.318 }' 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.318 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.319 09:51:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:02.319 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.319 09:51:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.319 09:51:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.319 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.319 "name": "raid_bdev1", 00:14:02.319 "uuid": "cacacdfd-5354-4888-90bd-adf213a775cb", 00:14:02.319 "strip_size_kb": 0, 00:14:02.319 "state": "online", 00:14:02.319 "raid_level": "raid1", 00:14:02.319 "superblock": false, 00:14:02.319 "num_base_bdevs": 2, 00:14:02.319 "num_base_bdevs_discovered": 2, 00:14:02.319 "num_base_bdevs_operational": 2, 00:14:02.319 "base_bdevs_list": [ 00:14:02.319 { 00:14:02.319 "name": "spare", 00:14:02.319 "uuid": "7b84a1d1-2c29-5d8e-8b60-fdcc497a8e66", 00:14:02.319 "is_configured": true, 00:14:02.319 "data_offset": 0, 00:14:02.319 "data_size": 65536 00:14:02.319 }, 00:14:02.319 { 00:14:02.319 "name": "BaseBdev2", 00:14:02.319 "uuid": "5ccfca1b-c2fd-5e07-b2e8-dc1324a8d8b6", 00:14:02.319 "is_configured": true, 00:14:02.319 "data_offset": 0, 00:14:02.319 "data_size": 65536 00:14:02.319 } 00:14:02.319 ] 00:14:02.319 }' 00:14:02.319 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.319 09:51:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.888 [2024-11-27 09:51:03.861245] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:02.888 [2024-11-27 09:51:03.861358] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:14:02.888 [2024-11-27 09:51:03.861510] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.888 [2024-11-27 09:51:03.861619] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:02.888 [2024-11-27 09:51:03.861684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:02.888 09:51:03 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:02.888 09:51:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:03.147 /dev/nbd0 00:14:03.147 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:03.147 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:03.147 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:03.147 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:03.147 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:03.147 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:03.147 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:03.147 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:03.147 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:03.147 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:03.147 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.147 1+0 records in 00:14:03.147 1+0 records out 00:14:03.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457708 s, 8.9 MB/s 00:14:03.147 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.147 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:03.147 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.147 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:03.147 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:03.147 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.147 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:03.147 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:03.406 /dev/nbd1 00:14:03.406 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:03.406 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:03.406 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:03.406 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:03.406 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:03.406 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:03.406 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:03.406 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:03.406 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:03.406 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:03.406 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:03.406 1+0 records in 00:14:03.406 1+0 records out 00:14:03.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410487 s, 10.0 MB/s 00:14:03.406 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.406 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:03.406 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:03.406 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:03.406 09:51:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:03.406 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:03.406 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:03.406 09:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:03.665 09:51:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:03.665 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:03.665 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:03.665 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:03.665 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:03.665 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.665 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:03.924 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:14:03.924 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:03.924 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:03.924 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.924 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.924 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:03.924 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:03.924 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.924 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.924 09:51:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:04.183 09:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:04.183 09:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:04.183 09:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:04.183 09:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.183 09:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.183 09:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:04.183 09:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:04.183 09:51:05 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.183 09:51:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:04.183 09:51:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75575 00:14:04.183 09:51:05 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@954 -- # '[' -z 75575 ']' 00:14:04.183 09:51:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75575 00:14:04.183 09:51:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:04.183 09:51:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.183 09:51:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75575 00:14:04.184 09:51:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:04.184 09:51:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:04.184 09:51:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75575' 00:14:04.184 killing process with pid 75575 00:14:04.184 09:51:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75575 00:14:04.184 Received shutdown signal, test time was about 60.000000 seconds 00:14:04.184 00:14:04.184 Latency(us) 00:14:04.184 [2024-11-27T09:51:05.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.184 [2024-11-27T09:51:05.317Z] =================================================================================================================== 00:14:04.184 [2024-11-27T09:51:05.317Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:04.184 [2024-11-27 09:51:05.132356] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:04.184 09:51:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75575 00:14:04.442 [2024-11-27 09:51:05.464465] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:05.820 00:14:05.820 real 0m15.846s 00:14:05.820 user 0m17.825s 00:14:05.820 sys 0m3.057s 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.820 ************************************ 00:14:05.820 END TEST raid_rebuild_test 00:14:05.820 ************************************ 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.820 09:51:06 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:14:05.820 09:51:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:05.820 09:51:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.820 09:51:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:05.820 ************************************ 00:14:05.820 START TEST raid_rebuild_test_sb 00:14:05.820 ************************************ 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.820 09:51:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:05.820 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:05.821 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:05.821 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:05.821 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:05.821 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75993 00:14:05.821 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75993 00:14:05.821 09:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:05.821 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75993 ']' 00:14:05.821 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:14:05.821 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.821 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:05.821 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.821 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.821 09:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.821 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:05.821 Zero copy mechanism will not be used. 00:14:05.821 [2024-11-27 09:51:06.892856] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:14:05.821 [2024-11-27 09:51:06.893035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75993 ] 00:14:06.080 [2024-11-27 09:51:07.069728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.080 [2024-11-27 09:51:07.207795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.338 [2024-11-27 09:51:07.446421] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.338 [2024-11-27 09:51:07.446483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.598 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.598 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:06.598 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.598 
09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:06.598 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.598 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.857 BaseBdev1_malloc 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.857 [2024-11-27 09:51:07.781342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:06.857 [2024-11-27 09:51:07.781500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.857 [2024-11-27 09:51:07.781550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:06.857 [2024-11-27 09:51:07.781609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.857 [2024-11-27 09:51:07.784208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.857 [2024-11-27 09:51:07.784316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:06.857 BaseBdev1 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.857 BaseBdev2_malloc 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.857 [2024-11-27 09:51:07.840470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:06.857 [2024-11-27 09:51:07.840640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.857 [2024-11-27 09:51:07.840693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:06.857 [2024-11-27 09:51:07.840738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.857 [2024-11-27 09:51:07.843344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.857 [2024-11-27 09:51:07.843436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:06.857 BaseBdev2 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.857 spare_malloc 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.857 09:51:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.857 spare_delay 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.857 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.857 [2024-11-27 09:51:07.925779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:06.857 [2024-11-27 09:51:07.925948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:06.858 [2024-11-27 09:51:07.926007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:06.858 [2024-11-27 09:51:07.926072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:06.858 [2024-11-27 09:51:07.928695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:06.858 [2024-11-27 09:51:07.928802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:06.858 spare 00:14:06.858 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.858 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:06.858 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.858 09:51:07 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:06.858 [2024-11-27 09:51:07.937846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:06.858 [2024-11-27 09:51:07.940076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:06.858 [2024-11-27 09:51:07.940341] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:06.858 [2024-11-27 09:51:07.940403] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:06.858 [2024-11-27 09:51:07.940721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:06.858 [2024-11-27 09:51:07.940973] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:06.858 [2024-11-27 09:51:07.941035] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:06.858 [2024-11-27 09:51:07.941295] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.858 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.858 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:06.858 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.858 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.858 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.858 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.858 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.858 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.858 09:51:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.858 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.858 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.858 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.858 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.858 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.858 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.858 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.117 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.117 "name": "raid_bdev1", 00:14:07.117 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:07.117 "strip_size_kb": 0, 00:14:07.117 "state": "online", 00:14:07.117 "raid_level": "raid1", 00:14:07.117 "superblock": true, 00:14:07.117 "num_base_bdevs": 2, 00:14:07.117 "num_base_bdevs_discovered": 2, 00:14:07.117 "num_base_bdevs_operational": 2, 00:14:07.117 "base_bdevs_list": [ 00:14:07.117 { 00:14:07.117 "name": "BaseBdev1", 00:14:07.117 "uuid": "96bf94be-f831-585e-9eb4-5977304e650d", 00:14:07.117 "is_configured": true, 00:14:07.117 "data_offset": 2048, 00:14:07.117 "data_size": 63488 00:14:07.117 }, 00:14:07.117 { 00:14:07.117 "name": "BaseBdev2", 00:14:07.117 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:07.117 "is_configured": true, 00:14:07.117 "data_offset": 2048, 00:14:07.117 "data_size": 63488 00:14:07.117 } 00:14:07.117 ] 00:14:07.117 }' 00:14:07.117 09:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.117 09:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.376 [2024-11-27 09:51:08.409538] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.376 
09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.376 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:07.636 [2024-11-27 09:51:08.696764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:07.636 /dev/nbd0 00:14:07.636 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:07.636 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:07.636 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:07.636 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:07.636 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:07.636 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:07.636 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:07.636 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:07.636 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:07.636 09:51:08 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:07.636 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.636 1+0 records in 00:14:07.636 1+0 records out 00:14:07.636 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000726118 s, 5.6 MB/s 00:14:07.636 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.895 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:07.895 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.895 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:07.895 09:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:07.895 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.895 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.895 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:07.895 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:07.895 09:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:12.087 63488+0 records in 00:14:12.087 63488+0 records out 00:14:12.087 32505856 bytes (33 MB, 31 MiB) copied, 4.30454 s, 7.6 MB/s 00:14:12.087 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:12.087 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.087 09:51:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:12.087 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:12.087 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:12.087 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.087 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:12.347 [2024-11-27 09:51:13.307260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.347 [2024-11-27 09:51:13.323398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.347 09:51:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.347 "name": "raid_bdev1", 00:14:12.347 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:12.347 "strip_size_kb": 0, 00:14:12.347 "state": "online", 00:14:12.347 "raid_level": "raid1", 00:14:12.347 "superblock": true, 00:14:12.347 "num_base_bdevs": 2, 
00:14:12.347 "num_base_bdevs_discovered": 1, 00:14:12.347 "num_base_bdevs_operational": 1, 00:14:12.347 "base_bdevs_list": [ 00:14:12.347 { 00:14:12.347 "name": null, 00:14:12.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.347 "is_configured": false, 00:14:12.347 "data_offset": 0, 00:14:12.347 "data_size": 63488 00:14:12.347 }, 00:14:12.347 { 00:14:12.347 "name": "BaseBdev2", 00:14:12.347 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:12.347 "is_configured": true, 00:14:12.347 "data_offset": 2048, 00:14:12.347 "data_size": 63488 00:14:12.347 } 00:14:12.347 ] 00:14:12.347 }' 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.347 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.917 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:12.917 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.917 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.917 [2024-11-27 09:51:13.834577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:12.917 [2024-11-27 09:51:13.853672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:14:12.917 09:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.917 [2024-11-27 09:51:13.856083] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:12.917 09:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:13.858 09:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.858 09:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.858 09:51:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.858 09:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.858 09:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.858 09:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.858 09:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.858 09:51:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.858 09:51:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.858 09:51:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.858 09:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.858 "name": "raid_bdev1", 00:14:13.858 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:13.858 "strip_size_kb": 0, 00:14:13.858 "state": "online", 00:14:13.858 "raid_level": "raid1", 00:14:13.858 "superblock": true, 00:14:13.858 "num_base_bdevs": 2, 00:14:13.858 "num_base_bdevs_discovered": 2, 00:14:13.858 "num_base_bdevs_operational": 2, 00:14:13.858 "process": { 00:14:13.858 "type": "rebuild", 00:14:13.858 "target": "spare", 00:14:13.858 "progress": { 00:14:13.858 "blocks": 20480, 00:14:13.858 "percent": 32 00:14:13.858 } 00:14:13.858 }, 00:14:13.858 "base_bdevs_list": [ 00:14:13.858 { 00:14:13.858 "name": "spare", 00:14:13.858 "uuid": "21513aad-4b0f-5ee2-9d98-4129e7ce15eb", 00:14:13.858 "is_configured": true, 00:14:13.858 "data_offset": 2048, 00:14:13.858 "data_size": 63488 00:14:13.858 }, 00:14:13.859 { 00:14:13.859 "name": "BaseBdev2", 00:14:13.859 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:13.859 "is_configured": true, 00:14:13.859 "data_offset": 2048, 00:14:13.859 "data_size": 63488 00:14:13.859 } 
00:14:13.859 ] 00:14:13.859 }' 00:14:13.859 09:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.859 09:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.859 09:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.119 09:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.119 09:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:14.119 09:51:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.119 09:51:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.119 [2024-11-27 09:51:15.004613] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.119 [2024-11-27 09:51:15.067600] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:14.119 [2024-11-27 09:51:15.067707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.119 [2024-11-27 09:51:15.067728] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.119 [2024-11-27 09:51:15.067741] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:14.119 09:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.119 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:14.119 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.119 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.119 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:14:14.119 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.119 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:14.119 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.119 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.119 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.119 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.119 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.119 09:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.119 09:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.119 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.119 09:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.119 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.119 "name": "raid_bdev1", 00:14:14.119 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:14.119 "strip_size_kb": 0, 00:14:14.119 "state": "online", 00:14:14.119 "raid_level": "raid1", 00:14:14.119 "superblock": true, 00:14:14.119 "num_base_bdevs": 2, 00:14:14.119 "num_base_bdevs_discovered": 1, 00:14:14.119 "num_base_bdevs_operational": 1, 00:14:14.119 "base_bdevs_list": [ 00:14:14.119 { 00:14:14.119 "name": null, 00:14:14.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.119 "is_configured": false, 00:14:14.119 "data_offset": 0, 00:14:14.119 "data_size": 63488 00:14:14.119 }, 00:14:14.119 { 00:14:14.119 "name": "BaseBdev2", 00:14:14.119 "uuid": 
"a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:14.119 "is_configured": true, 00:14:14.119 "data_offset": 2048, 00:14:14.119 "data_size": 63488 00:14:14.119 } 00:14:14.119 ] 00:14:14.119 }' 00:14:14.119 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.119 09:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.689 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:14.689 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.689 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:14.689 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:14.689 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.689 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.689 09:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.689 09:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.689 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.689 09:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.689 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.689 "name": "raid_bdev1", 00:14:14.689 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:14.689 "strip_size_kb": 0, 00:14:14.689 "state": "online", 00:14:14.690 "raid_level": "raid1", 00:14:14.690 "superblock": true, 00:14:14.690 "num_base_bdevs": 2, 00:14:14.690 "num_base_bdevs_discovered": 1, 00:14:14.690 "num_base_bdevs_operational": 1, 00:14:14.690 "base_bdevs_list": [ 00:14:14.690 { 
00:14:14.690 "name": null, 00:14:14.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.690 "is_configured": false, 00:14:14.690 "data_offset": 0, 00:14:14.690 "data_size": 63488 00:14:14.690 }, 00:14:14.690 { 00:14:14.690 "name": "BaseBdev2", 00:14:14.690 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:14.690 "is_configured": true, 00:14:14.690 "data_offset": 2048, 00:14:14.690 "data_size": 63488 00:14:14.690 } 00:14:14.690 ] 00:14:14.690 }' 00:14:14.690 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.690 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:14.690 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.690 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:14.690 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:14.690 09:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.690 09:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.690 [2024-11-27 09:51:15.666862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:14.690 [2024-11-27 09:51:15.685067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:14:14.690 09:51:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.690 09:51:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:14.690 [2024-11-27 09:51:15.687387] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:15.631 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.631 09:51:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.631 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.631 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.631 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.631 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.631 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.631 09:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.631 09:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.631 09:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.631 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.631 "name": "raid_bdev1", 00:14:15.631 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:15.631 "strip_size_kb": 0, 00:14:15.631 "state": "online", 00:14:15.631 "raid_level": "raid1", 00:14:15.631 "superblock": true, 00:14:15.631 "num_base_bdevs": 2, 00:14:15.631 "num_base_bdevs_discovered": 2, 00:14:15.631 "num_base_bdevs_operational": 2, 00:14:15.631 "process": { 00:14:15.631 "type": "rebuild", 00:14:15.631 "target": "spare", 00:14:15.631 "progress": { 00:14:15.631 "blocks": 20480, 00:14:15.631 "percent": 32 00:14:15.631 } 00:14:15.631 }, 00:14:15.631 "base_bdevs_list": [ 00:14:15.631 { 00:14:15.631 "name": "spare", 00:14:15.631 "uuid": "21513aad-4b0f-5ee2-9d98-4129e7ce15eb", 00:14:15.631 "is_configured": true, 00:14:15.631 "data_offset": 2048, 00:14:15.631 "data_size": 63488 00:14:15.631 }, 00:14:15.631 { 00:14:15.631 "name": "BaseBdev2", 00:14:15.631 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:15.631 
"is_configured": true, 00:14:15.631 "data_offset": 2048, 00:14:15.631 "data_size": 63488 00:14:15.631 } 00:14:15.631 ] 00:14:15.631 }' 00:14:15.631 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:15.892 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=391 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.892 "name": "raid_bdev1", 00:14:15.892 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:15.892 "strip_size_kb": 0, 00:14:15.892 "state": "online", 00:14:15.892 "raid_level": "raid1", 00:14:15.892 "superblock": true, 00:14:15.892 "num_base_bdevs": 2, 00:14:15.892 "num_base_bdevs_discovered": 2, 00:14:15.892 "num_base_bdevs_operational": 2, 00:14:15.892 "process": { 00:14:15.892 "type": "rebuild", 00:14:15.892 "target": "spare", 00:14:15.892 "progress": { 00:14:15.892 "blocks": 22528, 00:14:15.892 "percent": 35 00:14:15.892 } 00:14:15.892 }, 00:14:15.892 "base_bdevs_list": [ 00:14:15.892 { 00:14:15.892 "name": "spare", 00:14:15.892 "uuid": "21513aad-4b0f-5ee2-9d98-4129e7ce15eb", 00:14:15.892 "is_configured": true, 00:14:15.892 "data_offset": 2048, 00:14:15.892 "data_size": 63488 00:14:15.892 }, 00:14:15.892 { 00:14:15.892 "name": "BaseBdev2", 00:14:15.892 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:15.892 "is_configured": true, 00:14:15.892 "data_offset": 2048, 00:14:15.892 "data_size": 63488 00:14:15.892 } 00:14:15.892 ] 00:14:15.892 }' 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:15.892 09:51:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:15.892 09:51:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:17.274 09:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:17.274 09:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.274 09:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.274 09:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.274 09:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.274 09:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.274 09:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.274 09:51:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.274 09:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.274 09:51:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.274 09:51:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.274 09:51:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.274 "name": "raid_bdev1", 00:14:17.274 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:17.274 "strip_size_kb": 0, 00:14:17.274 "state": "online", 00:14:17.274 "raid_level": "raid1", 00:14:17.274 "superblock": true, 00:14:17.274 "num_base_bdevs": 2, 00:14:17.274 "num_base_bdevs_discovered": 2, 00:14:17.274 "num_base_bdevs_operational": 2, 00:14:17.274 "process": { 
00:14:17.274 "type": "rebuild", 00:14:17.274 "target": "spare", 00:14:17.274 "progress": { 00:14:17.274 "blocks": 45056, 00:14:17.274 "percent": 70 00:14:17.274 } 00:14:17.274 }, 00:14:17.274 "base_bdevs_list": [ 00:14:17.274 { 00:14:17.274 "name": "spare", 00:14:17.274 "uuid": "21513aad-4b0f-5ee2-9d98-4129e7ce15eb", 00:14:17.274 "is_configured": true, 00:14:17.274 "data_offset": 2048, 00:14:17.274 "data_size": 63488 00:14:17.274 }, 00:14:17.274 { 00:14:17.274 "name": "BaseBdev2", 00:14:17.274 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:17.274 "is_configured": true, 00:14:17.274 "data_offset": 2048, 00:14:17.274 "data_size": 63488 00:14:17.274 } 00:14:17.274 ] 00:14:17.274 }' 00:14:17.274 09:51:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.274 09:51:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.274 09:51:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.274 09:51:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.274 09:51:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:17.845 [2024-11-27 09:51:18.813679] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:17.845 [2024-11-27 09:51:18.813863] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:17.845 [2024-11-27 09:51:18.814060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.105 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.105 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.105 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.105 
09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:18.105 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.105 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.105 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.105 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.105 09:51:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.105 09:51:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.105 09:51:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.105 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.105 "name": "raid_bdev1", 00:14:18.105 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:18.105 "strip_size_kb": 0, 00:14:18.105 "state": "online", 00:14:18.105 "raid_level": "raid1", 00:14:18.105 "superblock": true, 00:14:18.105 "num_base_bdevs": 2, 00:14:18.105 "num_base_bdevs_discovered": 2, 00:14:18.105 "num_base_bdevs_operational": 2, 00:14:18.105 "base_bdevs_list": [ 00:14:18.105 { 00:14:18.105 "name": "spare", 00:14:18.105 "uuid": "21513aad-4b0f-5ee2-9d98-4129e7ce15eb", 00:14:18.105 "is_configured": true, 00:14:18.105 "data_offset": 2048, 00:14:18.105 "data_size": 63488 00:14:18.105 }, 00:14:18.105 { 00:14:18.105 "name": "BaseBdev2", 00:14:18.105 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:18.105 "is_configured": true, 00:14:18.105 "data_offset": 2048, 00:14:18.105 "data_size": 63488 00:14:18.105 } 00:14:18.105 ] 00:14:18.105 }' 00:14:18.105 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.105 09:51:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:18.105 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.365 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:18.365 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:18.365 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:18.365 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.365 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:18.365 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:18.365 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.365 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.365 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.365 09:51:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.365 09:51:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.365 09:51:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.365 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.366 "name": "raid_bdev1", 00:14:18.366 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:18.366 "strip_size_kb": 0, 00:14:18.366 "state": "online", 00:14:18.366 "raid_level": "raid1", 00:14:18.366 "superblock": true, 00:14:18.366 "num_base_bdevs": 2, 00:14:18.366 "num_base_bdevs_discovered": 2, 00:14:18.366 "num_base_bdevs_operational": 2, 00:14:18.366 "base_bdevs_list": [ 00:14:18.366 { 00:14:18.366 
"name": "spare", 00:14:18.366 "uuid": "21513aad-4b0f-5ee2-9d98-4129e7ce15eb", 00:14:18.366 "is_configured": true, 00:14:18.366 "data_offset": 2048, 00:14:18.366 "data_size": 63488 00:14:18.366 }, 00:14:18.366 { 00:14:18.366 "name": "BaseBdev2", 00:14:18.366 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:18.366 "is_configured": true, 00:14:18.366 "data_offset": 2048, 00:14:18.366 "data_size": 63488 00:14:18.366 } 00:14:18.366 ] 00:14:18.366 }' 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.366 "name": "raid_bdev1", 00:14:18.366 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:18.366 "strip_size_kb": 0, 00:14:18.366 "state": "online", 00:14:18.366 "raid_level": "raid1", 00:14:18.366 "superblock": true, 00:14:18.366 "num_base_bdevs": 2, 00:14:18.366 "num_base_bdevs_discovered": 2, 00:14:18.366 "num_base_bdevs_operational": 2, 00:14:18.366 "base_bdevs_list": [ 00:14:18.366 { 00:14:18.366 "name": "spare", 00:14:18.366 "uuid": "21513aad-4b0f-5ee2-9d98-4129e7ce15eb", 00:14:18.366 "is_configured": true, 00:14:18.366 "data_offset": 2048, 00:14:18.366 "data_size": 63488 00:14:18.366 }, 00:14:18.366 { 00:14:18.366 "name": "BaseBdev2", 00:14:18.366 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:18.366 "is_configured": true, 00:14:18.366 "data_offset": 2048, 00:14:18.366 "data_size": 63488 00:14:18.366 } 00:14:18.366 ] 00:14:18.366 }' 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.366 09:51:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:18.936 [2024-11-27 09:51:19.860633] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:18.936 [2024-11-27 09:51:19.860775] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:18.936 [2024-11-27 09:51:19.860933] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.936 [2024-11-27 09:51:19.861075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:18.936 [2024-11-27 09:51:19.861137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:18.936 09:51:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:19.197 /dev/nbd0 00:14:19.197 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:19.197 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:19.197 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:19.197 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:19.197 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:19.197 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:19.197 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:19.197 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:19.197 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:19.197 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:19.197 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:19.197 1+0 records in 00:14:19.197 1+0 records out 00:14:19.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556385 s, 7.4 MB/s 00:14:19.197 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.197 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:19.197 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.197 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:19.197 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:19.197 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:19.197 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:19.197 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:19.457 /dev/nbd1 00:14:19.457 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:19.457 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:19.457 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:19.457 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:19.457 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:19.457 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:19.457 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:19.457 09:51:20 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:19.457 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:19.457 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:19.457 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:19.457 1+0 records in 00:14:19.457 1+0 records out 00:14:19.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605785 s, 6.8 MB/s 00:14:19.457 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.457 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:19.457 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:19.457 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:19.457 09:51:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:19.457 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:19.457 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:19.457 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:19.727 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:19.727 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:19.727 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:19.727 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:19.727 
09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:19.727 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:19.727 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:19.727 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:19.727 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:19.727 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:19.727 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:19.727 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:19.727 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:19.727 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:19.727 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:19.727 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:19.727 09:51:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:19.987 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:19.987 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:19.987 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:19.987 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:19.987 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:19.987 09:51:21 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:19.987 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:19.987 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:19.987 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:19.987 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:19.987 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.987 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.987 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.987 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:19.987 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.987 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.987 [2024-11-27 09:51:21.070809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:19.987 [2024-11-27 09:51:21.070905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.987 [2024-11-27 09:51:21.070940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:19.987 [2024-11-27 09:51:21.070952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.987 [2024-11-27 09:51:21.073643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.987 spare 00:14:19.987 [2024-11-27 09:51:21.073745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:19.987 [2024-11-27 09:51:21.073883] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:19.987 
[2024-11-27 09:51:21.073948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:19.987 [2024-11-27 09:51:21.074149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.987 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.987 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:19.987 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.987 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.246 [2024-11-27 09:51:21.174086] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:20.246 [2024-11-27 09:51:21.174312] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:20.247 [2024-11-27 09:51:21.174808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:14:20.247 [2024-11-27 09:51:21.175164] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:20.247 [2024-11-27 09:51:21.175229] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:20.247 [2024-11-27 09:51:21.175558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.247 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.247 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:20.247 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.247 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.247 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:14:20.247 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.247 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:20.247 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.247 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.247 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.247 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.247 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.247 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.247 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.247 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.247 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.247 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.247 "name": "raid_bdev1", 00:14:20.247 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:20.247 "strip_size_kb": 0, 00:14:20.247 "state": "online", 00:14:20.247 "raid_level": "raid1", 00:14:20.247 "superblock": true, 00:14:20.247 "num_base_bdevs": 2, 00:14:20.247 "num_base_bdevs_discovered": 2, 00:14:20.247 "num_base_bdevs_operational": 2, 00:14:20.247 "base_bdevs_list": [ 00:14:20.247 { 00:14:20.247 "name": "spare", 00:14:20.247 "uuid": "21513aad-4b0f-5ee2-9d98-4129e7ce15eb", 00:14:20.247 "is_configured": true, 00:14:20.247 "data_offset": 2048, 00:14:20.247 "data_size": 63488 00:14:20.247 }, 00:14:20.247 { 00:14:20.247 "name": "BaseBdev2", 00:14:20.247 "uuid": 
"a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:20.247 "is_configured": true, 00:14:20.247 "data_offset": 2048, 00:14:20.247 "data_size": 63488 00:14:20.247 } 00:14:20.247 ] 00:14:20.247 }' 00:14:20.247 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.247 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.507 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:20.507 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.507 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:20.507 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:20.507 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.767 "name": "raid_bdev1", 00:14:20.767 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:20.767 "strip_size_kb": 0, 00:14:20.767 "state": "online", 00:14:20.767 "raid_level": "raid1", 00:14:20.767 "superblock": true, 00:14:20.767 "num_base_bdevs": 2, 00:14:20.767 "num_base_bdevs_discovered": 2, 00:14:20.767 "num_base_bdevs_operational": 2, 00:14:20.767 "base_bdevs_list": [ 00:14:20.767 { 
00:14:20.767 "name": "spare", 00:14:20.767 "uuid": "21513aad-4b0f-5ee2-9d98-4129e7ce15eb", 00:14:20.767 "is_configured": true, 00:14:20.767 "data_offset": 2048, 00:14:20.767 "data_size": 63488 00:14:20.767 }, 00:14:20.767 { 00:14:20.767 "name": "BaseBdev2", 00:14:20.767 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:20.767 "is_configured": true, 00:14:20.767 "data_offset": 2048, 00:14:20.767 "data_size": 63488 00:14:20.767 } 00:14:20.767 ] 00:14:20.767 }' 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.767 [2024-11-27 09:51:21.842359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.767 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.028 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.028 "name": "raid_bdev1", 00:14:21.028 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:21.028 "strip_size_kb": 0, 00:14:21.028 
"state": "online", 00:14:21.028 "raid_level": "raid1", 00:14:21.028 "superblock": true, 00:14:21.028 "num_base_bdevs": 2, 00:14:21.028 "num_base_bdevs_discovered": 1, 00:14:21.028 "num_base_bdevs_operational": 1, 00:14:21.028 "base_bdevs_list": [ 00:14:21.028 { 00:14:21.028 "name": null, 00:14:21.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.028 "is_configured": false, 00:14:21.028 "data_offset": 0, 00:14:21.028 "data_size": 63488 00:14:21.028 }, 00:14:21.028 { 00:14:21.028 "name": "BaseBdev2", 00:14:21.028 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:21.028 "is_configured": true, 00:14:21.028 "data_offset": 2048, 00:14:21.028 "data_size": 63488 00:14:21.028 } 00:14:21.028 ] 00:14:21.028 }' 00:14:21.028 09:51:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.028 09:51:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.316 09:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:21.316 09:51:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.316 09:51:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.316 [2024-11-27 09:51:22.237770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:21.316 [2024-11-27 09:51:22.238139] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:21.316 [2024-11-27 09:51:22.238225] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:21.316 [2024-11-27 09:51:22.238340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:21.316 [2024-11-27 09:51:22.256049] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:14:21.316 09:51:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.316 09:51:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:21.316 [2024-11-27 09:51:22.258367] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:22.256 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.256 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.256 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.256 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.256 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.256 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.256 09:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.256 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.256 09:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.256 09:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.256 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.256 "name": "raid_bdev1", 00:14:22.256 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:22.256 "strip_size_kb": 0, 00:14:22.256 "state": "online", 00:14:22.256 "raid_level": "raid1", 
00:14:22.256 "superblock": true, 00:14:22.256 "num_base_bdevs": 2, 00:14:22.256 "num_base_bdevs_discovered": 2, 00:14:22.256 "num_base_bdevs_operational": 2, 00:14:22.256 "process": { 00:14:22.256 "type": "rebuild", 00:14:22.256 "target": "spare", 00:14:22.256 "progress": { 00:14:22.256 "blocks": 20480, 00:14:22.256 "percent": 32 00:14:22.256 } 00:14:22.256 }, 00:14:22.256 "base_bdevs_list": [ 00:14:22.256 { 00:14:22.256 "name": "spare", 00:14:22.256 "uuid": "21513aad-4b0f-5ee2-9d98-4129e7ce15eb", 00:14:22.256 "is_configured": true, 00:14:22.256 "data_offset": 2048, 00:14:22.256 "data_size": 63488 00:14:22.256 }, 00:14:22.256 { 00:14:22.256 "name": "BaseBdev2", 00:14:22.256 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:22.256 "is_configured": true, 00:14:22.256 "data_offset": 2048, 00:14:22.256 "data_size": 63488 00:14:22.256 } 00:14:22.256 ] 00:14:22.256 }' 00:14:22.256 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.256 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.256 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.516 [2024-11-27 09:51:23.418223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:22.516 [2024-11-27 09:51:23.467784] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:22.516 [2024-11-27 09:51:23.467922] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:14:22.516 [2024-11-27 09:51:23.467988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:22.516 [2024-11-27 09:51:23.468039] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.516 "name": "raid_bdev1", 00:14:22.516 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:22.516 "strip_size_kb": 0, 00:14:22.516 "state": "online", 00:14:22.516 "raid_level": "raid1", 00:14:22.516 "superblock": true, 00:14:22.516 "num_base_bdevs": 2, 00:14:22.516 "num_base_bdevs_discovered": 1, 00:14:22.516 "num_base_bdevs_operational": 1, 00:14:22.516 "base_bdevs_list": [ 00:14:22.516 { 00:14:22.516 "name": null, 00:14:22.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.516 "is_configured": false, 00:14:22.516 "data_offset": 0, 00:14:22.516 "data_size": 63488 00:14:22.516 }, 00:14:22.516 { 00:14:22.516 "name": "BaseBdev2", 00:14:22.516 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:22.516 "is_configured": true, 00:14:22.516 "data_offset": 2048, 00:14:22.516 "data_size": 63488 00:14:22.516 } 00:14:22.516 ] 00:14:22.516 }' 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.516 09:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.084 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:23.084 09:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.084 09:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.084 [2024-11-27 09:51:23.968974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:23.084 [2024-11-27 09:51:23.969148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.084 [2024-11-27 09:51:23.969202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:23.084 [2024-11-27 09:51:23.969259] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.084 [2024-11-27 09:51:23.969907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.084 [2024-11-27 09:51:23.969989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:23.084 [2024-11-27 09:51:23.970176] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:23.085 [2024-11-27 09:51:23.970231] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:23.085 [2024-11-27 09:51:23.970288] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:23.085 [2024-11-27 09:51:23.970359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:23.085 [2024-11-27 09:51:23.988681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:23.085 spare 00:14:23.085 09:51:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.085 09:51:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:23.085 [2024-11-27 09:51:23.991071] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:24.021 09:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.021 09:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.021 09:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.021 09:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.021 09:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.021 09:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:24.021 09:51:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.021 09:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.021 09:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.021 09:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.021 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.021 "name": "raid_bdev1", 00:14:24.021 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:24.021 "strip_size_kb": 0, 00:14:24.021 "state": "online", 00:14:24.021 "raid_level": "raid1", 00:14:24.021 "superblock": true, 00:14:24.021 "num_base_bdevs": 2, 00:14:24.021 "num_base_bdevs_discovered": 2, 00:14:24.021 "num_base_bdevs_operational": 2, 00:14:24.021 "process": { 00:14:24.021 "type": "rebuild", 00:14:24.021 "target": "spare", 00:14:24.021 "progress": { 00:14:24.021 "blocks": 20480, 00:14:24.021 "percent": 32 00:14:24.021 } 00:14:24.021 }, 00:14:24.021 "base_bdevs_list": [ 00:14:24.021 { 00:14:24.021 "name": "spare", 00:14:24.021 "uuid": "21513aad-4b0f-5ee2-9d98-4129e7ce15eb", 00:14:24.021 "is_configured": true, 00:14:24.021 "data_offset": 2048, 00:14:24.021 "data_size": 63488 00:14:24.021 }, 00:14:24.021 { 00:14:24.021 "name": "BaseBdev2", 00:14:24.021 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:24.021 "is_configured": true, 00:14:24.021 "data_offset": 2048, 00:14:24.021 "data_size": 63488 00:14:24.021 } 00:14:24.021 ] 00:14:24.021 }' 00:14:24.021 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.021 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.021 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.021 
09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.021 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:24.021 09:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.021 09:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.021 [2024-11-27 09:51:25.139041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.281 [2024-11-27 09:51:25.201977] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:24.281 [2024-11-27 09:51:25.202174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.281 [2024-11-27 09:51:25.202201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.281 [2024-11-27 09:51:25.202212] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:24.281 09:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.281 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:24.281 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.281 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.281 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.281 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.281 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:24.281 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.281 09:51:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.281 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.281 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.281 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.281 09:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.281 09:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.281 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.281 09:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.281 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.281 "name": "raid_bdev1", 00:14:24.281 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:24.281 "strip_size_kb": 0, 00:14:24.281 "state": "online", 00:14:24.281 "raid_level": "raid1", 00:14:24.281 "superblock": true, 00:14:24.281 "num_base_bdevs": 2, 00:14:24.281 "num_base_bdevs_discovered": 1, 00:14:24.281 "num_base_bdevs_operational": 1, 00:14:24.281 "base_bdevs_list": [ 00:14:24.281 { 00:14:24.281 "name": null, 00:14:24.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.281 "is_configured": false, 00:14:24.281 "data_offset": 0, 00:14:24.281 "data_size": 63488 00:14:24.281 }, 00:14:24.281 { 00:14:24.281 "name": "BaseBdev2", 00:14:24.281 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:24.281 "is_configured": true, 00:14:24.281 "data_offset": 2048, 00:14:24.281 "data_size": 63488 00:14:24.281 } 00:14:24.281 ] 00:14:24.281 }' 00:14:24.281 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.281 09:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.851 09:51:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:24.851 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.851 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:24.851 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:24.851 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.851 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.851 09:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.851 09:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.851 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.851 09:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.851 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.851 "name": "raid_bdev1", 00:14:24.851 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:24.851 "strip_size_kb": 0, 00:14:24.851 "state": "online", 00:14:24.851 "raid_level": "raid1", 00:14:24.851 "superblock": true, 00:14:24.851 "num_base_bdevs": 2, 00:14:24.851 "num_base_bdevs_discovered": 1, 00:14:24.851 "num_base_bdevs_operational": 1, 00:14:24.851 "base_bdevs_list": [ 00:14:24.851 { 00:14:24.851 "name": null, 00:14:24.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.851 "is_configured": false, 00:14:24.851 "data_offset": 0, 00:14:24.851 "data_size": 63488 00:14:24.851 }, 00:14:24.851 { 00:14:24.851 "name": "BaseBdev2", 00:14:24.851 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:24.851 "is_configured": true, 00:14:24.851 "data_offset": 2048, 00:14:24.851 "data_size": 
63488 00:14:24.851 } 00:14:24.851 ] 00:14:24.851 }' 00:14:24.851 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.851 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:24.851 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.851 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:24.851 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:24.851 09:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.851 09:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.852 09:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.852 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:24.852 09:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.852 09:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.852 [2024-11-27 09:51:25.864419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:24.852 [2024-11-27 09:51:25.864560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.852 [2024-11-27 09:51:25.864616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:24.852 [2024-11-27 09:51:25.864671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.852 [2024-11-27 09:51:25.865267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.852 [2024-11-27 09:51:25.865340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:14:24.852 [2024-11-27 09:51:25.865475] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:24.852 [2024-11-27 09:51:25.865523] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:24.852 [2024-11-27 09:51:25.865542] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:24.852 [2024-11-27 09:51:25.865556] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:24.852 BaseBdev1 00:14:24.852 09:51:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.852 09:51:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:25.789 09:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:25.789 09:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.789 09:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.789 09:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.789 09:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.789 09:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:25.789 09:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.789 09:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.789 09:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.789 09:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.789 09:51:26 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.789 09:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.789 09:51:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.789 09:51:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.789 09:51:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.048 09:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.048 "name": "raid_bdev1", 00:14:26.048 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:26.048 "strip_size_kb": 0, 00:14:26.048 "state": "online", 00:14:26.048 "raid_level": "raid1", 00:14:26.048 "superblock": true, 00:14:26.048 "num_base_bdevs": 2, 00:14:26.048 "num_base_bdevs_discovered": 1, 00:14:26.048 "num_base_bdevs_operational": 1, 00:14:26.048 "base_bdevs_list": [ 00:14:26.048 { 00:14:26.048 "name": null, 00:14:26.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.048 "is_configured": false, 00:14:26.048 "data_offset": 0, 00:14:26.048 "data_size": 63488 00:14:26.048 }, 00:14:26.048 { 00:14:26.048 "name": "BaseBdev2", 00:14:26.048 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:26.048 "is_configured": true, 00:14:26.048 "data_offset": 2048, 00:14:26.048 "data_size": 63488 00:14:26.048 } 00:14:26.048 ] 00:14:26.048 }' 00:14:26.048 09:51:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.048 09:51:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.384 "name": "raid_bdev1", 00:14:26.384 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:26.384 "strip_size_kb": 0, 00:14:26.384 "state": "online", 00:14:26.384 "raid_level": "raid1", 00:14:26.384 "superblock": true, 00:14:26.384 "num_base_bdevs": 2, 00:14:26.384 "num_base_bdevs_discovered": 1, 00:14:26.384 "num_base_bdevs_operational": 1, 00:14:26.384 "base_bdevs_list": [ 00:14:26.384 { 00:14:26.384 "name": null, 00:14:26.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.384 "is_configured": false, 00:14:26.384 "data_offset": 0, 00:14:26.384 "data_size": 63488 00:14:26.384 }, 00:14:26.384 { 00:14:26.384 "name": "BaseBdev2", 00:14:26.384 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:26.384 "is_configured": true, 00:14:26.384 "data_offset": 2048, 00:14:26.384 "data_size": 63488 00:14:26.384 } 00:14:26.384 ] 00:14:26.384 }' 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:26.384 09:51:27 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.384 [2024-11-27 09:51:27.449811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:26.384 [2024-11-27 09:51:27.450140] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:26.384 [2024-11-27 09:51:27.450235] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:26.384 request: 00:14:26.384 { 00:14:26.384 "base_bdev": "BaseBdev1", 00:14:26.384 "raid_bdev": "raid_bdev1", 00:14:26.384 "method": 
"bdev_raid_add_base_bdev", 00:14:26.384 "req_id": 1 00:14:26.384 } 00:14:26.384 Got JSON-RPC error response 00:14:26.384 response: 00:14:26.384 { 00:14:26.384 "code": -22, 00:14:26.384 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:26.384 } 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:26.384 09:51:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:27.347 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:27.347 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.347 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.347 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.347 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.347 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:27.347 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.347 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.347 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.347 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.347 09:51:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.347 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.347 09:51:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.347 09:51:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.606 09:51:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.606 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.606 "name": "raid_bdev1", 00:14:27.606 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:27.606 "strip_size_kb": 0, 00:14:27.606 "state": "online", 00:14:27.606 "raid_level": "raid1", 00:14:27.606 "superblock": true, 00:14:27.606 "num_base_bdevs": 2, 00:14:27.606 "num_base_bdevs_discovered": 1, 00:14:27.606 "num_base_bdevs_operational": 1, 00:14:27.606 "base_bdevs_list": [ 00:14:27.606 { 00:14:27.606 "name": null, 00:14:27.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.606 "is_configured": false, 00:14:27.606 "data_offset": 0, 00:14:27.606 "data_size": 63488 00:14:27.606 }, 00:14:27.606 { 00:14:27.606 "name": "BaseBdev2", 00:14:27.606 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:27.606 "is_configured": true, 00:14:27.606 "data_offset": 2048, 00:14:27.606 "data_size": 63488 00:14:27.606 } 00:14:27.606 ] 00:14:27.606 }' 00:14:27.606 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.606 09:51:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.866 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:27.866 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.866 09:51:28 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:27.866 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:27.866 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.866 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.866 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.866 09:51:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.866 09:51:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.866 09:51:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.866 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.866 "name": "raid_bdev1", 00:14:27.866 "uuid": "f8e9df59-7b69-4b29-9961-5ea350f2cb55", 00:14:27.867 "strip_size_kb": 0, 00:14:27.867 "state": "online", 00:14:27.867 "raid_level": "raid1", 00:14:27.867 "superblock": true, 00:14:27.867 "num_base_bdevs": 2, 00:14:27.867 "num_base_bdevs_discovered": 1, 00:14:27.867 "num_base_bdevs_operational": 1, 00:14:27.867 "base_bdevs_list": [ 00:14:27.867 { 00:14:27.867 "name": null, 00:14:27.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.867 "is_configured": false, 00:14:27.867 "data_offset": 0, 00:14:27.867 "data_size": 63488 00:14:27.867 }, 00:14:27.867 { 00:14:27.867 "name": "BaseBdev2", 00:14:27.867 "uuid": "a3915f24-cc74-5e25-9cab-cfc2e91e053c", 00:14:27.867 "is_configured": true, 00:14:27.867 "data_offset": 2048, 00:14:27.867 "data_size": 63488 00:14:27.867 } 00:14:27.867 ] 00:14:27.867 }' 00:14:27.867 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.867 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:14:27.867 09:51:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.126 09:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:28.126 09:51:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75993 00:14:28.126 09:51:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75993 ']' 00:14:28.126 09:51:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75993 00:14:28.126 09:51:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:28.126 09:51:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:28.126 09:51:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75993 00:14:28.126 killing process with pid 75993 00:14:28.126 Received shutdown signal, test time was about 60.000000 seconds 00:14:28.126 00:14:28.126 Latency(us) 00:14:28.126 [2024-11-27T09:51:29.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.126 [2024-11-27T09:51:29.259Z] =================================================================================================================== 00:14:28.126 [2024-11-27T09:51:29.259Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:28.126 09:51:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:28.126 09:51:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:28.126 09:51:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75993' 00:14:28.126 09:51:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75993 00:14:28.126 [2024-11-27 09:51:29.069729] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:28.126 09:51:29 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75993 00:14:28.126 [2024-11-27 09:51:29.069901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:28.126 [2024-11-27 09:51:29.069968] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:28.126 [2024-11-27 09:51:29.069984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:28.384 [2024-11-27 09:51:29.397197] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:29.765 00:14:29.765 real 0m23.828s 00:14:29.765 user 0m28.858s 00:14:29.765 sys 0m3.848s 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.765 ************************************ 00:14:29.765 END TEST raid_rebuild_test_sb 00:14:29.765 ************************************ 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.765 09:51:30 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:14:29.765 09:51:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:29.765 09:51:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.765 09:51:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:29.765 ************************************ 00:14:29.765 START TEST raid_rebuild_test_io 00:14:29.765 ************************************ 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:29.765 
09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76731 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76731 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76731 ']' 00:14:29.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:29.765 09:51:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:29.765 [2024-11-27 09:51:30.792071] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:14:29.765 [2024-11-27 09:51:30.792325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76731 ] 00:14:29.765 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:29.765 Zero copy mechanism will not be used. 
00:14:30.025 [2024-11-27 09:51:30.973500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.025 [2024-11-27 09:51:31.115565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.285 [2024-11-27 09:51:31.348111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.285 [2024-11-27 09:51:31.348314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.544 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.544 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:30.544 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:30.544 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:30.544 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.544 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.544 BaseBdev1_malloc 00:14:30.544 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.544 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:30.544 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.544 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.804 [2024-11-27 09:51:31.677714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:30.804 [2024-11-27 09:51:31.677874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.804 [2024-11-27 09:51:31.677925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:30.804 [2024-11-27 
09:51:31.677969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.804 [2024-11-27 09:51:31.680563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.804 [2024-11-27 09:51:31.680661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:30.804 BaseBdev1 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.804 BaseBdev2_malloc 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.804 [2024-11-27 09:51:31.739669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:30.804 [2024-11-27 09:51:31.739828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.804 [2024-11-27 09:51:31.739888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:30.804 [2024-11-27 09:51:31.739906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.804 [2024-11-27 09:51:31.742783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:30.804 [2024-11-27 09:51:31.742837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:30.804 BaseBdev2 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.804 spare_malloc 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.804 spare_delay 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.804 [2024-11-27 09:51:31.826096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:30.804 [2024-11-27 09:51:31.826226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.804 [2024-11-27 09:51:31.826275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:30.804 [2024-11-27 09:51:31.826336] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.804 [2024-11-27 09:51:31.828953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.804 [2024-11-27 09:51:31.829073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:30.804 spare 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.804 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.804 [2024-11-27 09:51:31.838146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.804 [2024-11-27 09:51:31.840379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.805 [2024-11-27 09:51:31.840560] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:30.805 [2024-11-27 09:51:31.840602] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:30.805 [2024-11-27 09:51:31.840963] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:30.805 [2024-11-27 09:51:31.841249] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:30.805 [2024-11-27 09:51:31.841302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:30.805 [2024-11-27 09:51:31.841550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.805 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.805 09:51:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:30.805 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.805 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.805 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.805 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.805 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:30.805 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.805 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.805 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.805 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.805 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.805 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.805 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.805 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.805 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.805 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.805 "name": "raid_bdev1", 00:14:30.805 "uuid": "5cd5c0cf-cc4c-4fd0-9eb4-053e69128146", 00:14:30.805 "strip_size_kb": 0, 00:14:30.805 "state": "online", 00:14:30.805 "raid_level": "raid1", 00:14:30.805 "superblock": false, 00:14:30.805 "num_base_bdevs": 2, 00:14:30.805 
"num_base_bdevs_discovered": 2, 00:14:30.805 "num_base_bdevs_operational": 2, 00:14:30.805 "base_bdevs_list": [ 00:14:30.805 { 00:14:30.805 "name": "BaseBdev1", 00:14:30.805 "uuid": "7bf803b0-a967-5941-880c-5ce310060bae", 00:14:30.805 "is_configured": true, 00:14:30.805 "data_offset": 0, 00:14:30.805 "data_size": 65536 00:14:30.805 }, 00:14:30.805 { 00:14:30.805 "name": "BaseBdev2", 00:14:30.805 "uuid": "745f80a7-d4ac-57ff-879e-e41419acbb33", 00:14:30.805 "is_configured": true, 00:14:30.805 "data_offset": 0, 00:14:30.805 "data_size": 65536 00:14:30.805 } 00:14:30.805 ] 00:14:30.805 }' 00:14:30.805 09:51:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.805 09:51:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.374 [2024-11-27 09:51:32.337615] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.374 [2024-11-27 09:51:32.413215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.374 "name": "raid_bdev1", 00:14:31.374 "uuid": "5cd5c0cf-cc4c-4fd0-9eb4-053e69128146", 00:14:31.374 "strip_size_kb": 0, 00:14:31.374 "state": "online", 00:14:31.374 "raid_level": "raid1", 00:14:31.374 "superblock": false, 00:14:31.374 "num_base_bdevs": 2, 00:14:31.374 "num_base_bdevs_discovered": 1, 00:14:31.374 "num_base_bdevs_operational": 1, 00:14:31.374 "base_bdevs_list": [ 00:14:31.374 { 00:14:31.374 "name": null, 00:14:31.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.374 "is_configured": false, 00:14:31.374 "data_offset": 0, 00:14:31.374 "data_size": 65536 00:14:31.374 }, 00:14:31.374 { 00:14:31.374 "name": "BaseBdev2", 00:14:31.374 "uuid": "745f80a7-d4ac-57ff-879e-e41419acbb33", 00:14:31.374 "is_configured": true, 00:14:31.374 "data_offset": 0, 00:14:31.374 "data_size": 65536 00:14:31.374 } 00:14:31.374 ] 00:14:31.374 }' 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.374 09:51:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.635 [2024-11-27 09:51:32.514663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:31.635 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:14:31.635 Zero copy mechanism will not be used. 00:14:31.635 Running I/O for 60 seconds... 00:14:31.895 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:31.895 09:51:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.895 09:51:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.895 [2024-11-27 09:51:32.867921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:31.895 09:51:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.895 09:51:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:31.895 [2024-11-27 09:51:32.924701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:31.895 [2024-11-27 09:51:32.927213] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:32.155 [2024-11-27 09:51:33.047559] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:32.155 [2024-11-27 09:51:33.048564] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:32.155 [2024-11-27 09:51:33.171255] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:32.155 [2024-11-27 09:51:33.171718] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:32.414 179.00 IOPS, 537.00 MiB/s [2024-11-27T09:51:33.547Z] [2024-11-27 09:51:33.545204] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:32.985 [2024-11-27 09:51:33.871458] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:32.985 09:51:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.985 09:51:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.985 09:51:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:32.985 09:51:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:32.985 09:51:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.985 09:51:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.985 09:51:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.985 09:51:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.985 09:51:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.985 09:51:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.985 09:51:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.985 "name": "raid_bdev1", 00:14:32.985 "uuid": "5cd5c0cf-cc4c-4fd0-9eb4-053e69128146", 00:14:32.985 "strip_size_kb": 0, 00:14:32.985 "state": "online", 00:14:32.985 "raid_level": "raid1", 00:14:32.985 "superblock": false, 00:14:32.985 "num_base_bdevs": 2, 00:14:32.985 "num_base_bdevs_discovered": 2, 00:14:32.985 "num_base_bdevs_operational": 2, 00:14:32.985 "process": { 00:14:32.985 "type": "rebuild", 00:14:32.985 "target": "spare", 00:14:32.985 "progress": { 00:14:32.985 "blocks": 14336, 00:14:32.985 "percent": 21 00:14:32.985 } 00:14:32.985 }, 00:14:32.985 "base_bdevs_list": [ 00:14:32.985 { 00:14:32.985 "name": "spare", 00:14:32.985 "uuid": "988f21ad-6a05-5280-9588-f0ed05cb885b", 00:14:32.985 
"is_configured": true, 00:14:32.985 "data_offset": 0, 00:14:32.985 "data_size": 65536 00:14:32.985 }, 00:14:32.985 { 00:14:32.985 "name": "BaseBdev2", 00:14:32.985 "uuid": "745f80a7-d4ac-57ff-879e-e41419acbb33", 00:14:32.985 "is_configured": true, 00:14:32.985 "data_offset": 0, 00:14:32.985 "data_size": 65536 00:14:32.985 } 00:14:32.985 ] 00:14:32.985 }' 00:14:32.985 09:51:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.985 [2024-11-27 09:51:33.991189] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:32.985 [2024-11-27 09:51:33.991788] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:32.985 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:32.985 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.985 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.985 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:32.985 09:51:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.985 09:51:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.985 [2024-11-27 09:51:34.078944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:32.985 [2024-11-27 09:51:34.113326] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:33.246 [2024-11-27 09:51:34.226532] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:33.246 [2024-11-27 09:51:34.236666] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:33.246 [2024-11-27 09:51:34.236845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.246 [2024-11-27 09:51:34.236884] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:33.246 [2024-11-27 09:51:34.287520] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:33.246 09:51:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.246 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:33.246 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.246 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.246 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.246 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.246 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:33.246 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.246 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.246 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.246 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.246 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.246 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.246 09:51:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.246 09:51:34 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.246 09:51:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.246 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.246 "name": "raid_bdev1", 00:14:33.246 "uuid": "5cd5c0cf-cc4c-4fd0-9eb4-053e69128146", 00:14:33.246 "strip_size_kb": 0, 00:14:33.246 "state": "online", 00:14:33.246 "raid_level": "raid1", 00:14:33.246 "superblock": false, 00:14:33.246 "num_base_bdevs": 2, 00:14:33.246 "num_base_bdevs_discovered": 1, 00:14:33.246 "num_base_bdevs_operational": 1, 00:14:33.246 "base_bdevs_list": [ 00:14:33.246 { 00:14:33.246 "name": null, 00:14:33.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.246 "is_configured": false, 00:14:33.246 "data_offset": 0, 00:14:33.246 "data_size": 65536 00:14:33.246 }, 00:14:33.246 { 00:14:33.246 "name": "BaseBdev2", 00:14:33.246 "uuid": "745f80a7-d4ac-57ff-879e-e41419acbb33", 00:14:33.246 "is_configured": true, 00:14:33.246 "data_offset": 0, 00:14:33.246 "data_size": 65536 00:14:33.246 } 00:14:33.246 ] 00:14:33.246 }' 00:14:33.246 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.246 09:51:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.768 146.50 IOPS, 439.50 MiB/s [2024-11-27T09:51:34.901Z] 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.768 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.768 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:33.768 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:33.768 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.768 09:51:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.768 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.768 09:51:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.768 09:51:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.768 09:51:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.768 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.768 "name": "raid_bdev1", 00:14:33.768 "uuid": "5cd5c0cf-cc4c-4fd0-9eb4-053e69128146", 00:14:33.768 "strip_size_kb": 0, 00:14:33.768 "state": "online", 00:14:33.768 "raid_level": "raid1", 00:14:33.768 "superblock": false, 00:14:33.768 "num_base_bdevs": 2, 00:14:33.768 "num_base_bdevs_discovered": 1, 00:14:33.768 "num_base_bdevs_operational": 1, 00:14:33.768 "base_bdevs_list": [ 00:14:33.768 { 00:14:33.768 "name": null, 00:14:33.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.768 "is_configured": false, 00:14:33.768 "data_offset": 0, 00:14:33.768 "data_size": 65536 00:14:33.768 }, 00:14:33.768 { 00:14:33.768 "name": "BaseBdev2", 00:14:33.768 "uuid": "745f80a7-d4ac-57ff-879e-e41419acbb33", 00:14:33.768 "is_configured": true, 00:14:33.768 "data_offset": 0, 00:14:33.768 "data_size": 65536 00:14:33.768 } 00:14:33.768 ] 00:14:33.768 }' 00:14:33.768 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.768 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:33.768 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.768 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:34.028 09:51:34 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:34.028 09:51:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.028 09:51:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.028 [2024-11-27 09:51:34.910088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.028 09:51:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.028 09:51:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:34.028 [2024-11-27 09:51:34.969180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:34.028 [2024-11-27 09:51:34.971524] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:34.028 [2024-11-27 09:51:35.080626] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:34.028 [2024-11-27 09:51:35.081651] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:34.288 [2024-11-27 09:51:35.304150] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:34.288 [2024-11-27 09:51:35.304705] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:34.548 165.00 IOPS, 495.00 MiB/s [2024-11-27T09:51:35.681Z] [2024-11-27 09:51:35.550693] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:34.548 [2024-11-27 09:51:35.551660] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:34.809 [2024-11-27 09:51:35.762161] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 
offset_begin: 6144 offset_end: 12288 00:14:34.809 [2024-11-27 09:51:35.762792] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:35.069 09:51:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.069 09:51:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.069 09:51:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.069 09:51:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.069 09:51:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.069 09:51:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.069 09:51:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.069 09:51:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.069 09:51:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.069 09:51:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.069 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.069 "name": "raid_bdev1", 00:14:35.069 "uuid": "5cd5c0cf-cc4c-4fd0-9eb4-053e69128146", 00:14:35.069 "strip_size_kb": 0, 00:14:35.069 "state": "online", 00:14:35.069 "raid_level": "raid1", 00:14:35.069 "superblock": false, 00:14:35.069 "num_base_bdevs": 2, 00:14:35.069 "num_base_bdevs_discovered": 2, 00:14:35.069 "num_base_bdevs_operational": 2, 00:14:35.069 "process": { 00:14:35.069 "type": "rebuild", 00:14:35.069 "target": "spare", 00:14:35.069 "progress": { 00:14:35.069 "blocks": 10240, 00:14:35.069 "percent": 15 00:14:35.070 } 00:14:35.070 }, 00:14:35.070 
"base_bdevs_list": [ 00:14:35.070 { 00:14:35.070 "name": "spare", 00:14:35.070 "uuid": "988f21ad-6a05-5280-9588-f0ed05cb885b", 00:14:35.070 "is_configured": true, 00:14:35.070 "data_offset": 0, 00:14:35.070 "data_size": 65536 00:14:35.070 }, 00:14:35.070 { 00:14:35.070 "name": "BaseBdev2", 00:14:35.070 "uuid": "745f80a7-d4ac-57ff-879e-e41419acbb33", 00:14:35.070 "is_configured": true, 00:14:35.070 "data_offset": 0, 00:14:35.070 "data_size": 65536 00:14:35.070 } 00:14:35.070 ] 00:14:35.070 }' 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.070 [2024-11-27 09:51:36.104218] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=411 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.070 "name": "raid_bdev1", 00:14:35.070 "uuid": "5cd5c0cf-cc4c-4fd0-9eb4-053e69128146", 00:14:35.070 "strip_size_kb": 0, 00:14:35.070 "state": "online", 00:14:35.070 "raid_level": "raid1", 00:14:35.070 "superblock": false, 00:14:35.070 "num_base_bdevs": 2, 00:14:35.070 "num_base_bdevs_discovered": 2, 00:14:35.070 "num_base_bdevs_operational": 2, 00:14:35.070 "process": { 00:14:35.070 "type": "rebuild", 00:14:35.070 "target": "spare", 00:14:35.070 "progress": { 00:14:35.070 "blocks": 14336, 00:14:35.070 "percent": 21 00:14:35.070 } 00:14:35.070 }, 00:14:35.070 "base_bdevs_list": [ 00:14:35.070 { 00:14:35.070 "name": "spare", 00:14:35.070 "uuid": "988f21ad-6a05-5280-9588-f0ed05cb885b", 00:14:35.070 "is_configured": true, 00:14:35.070 "data_offset": 0, 00:14:35.070 "data_size": 65536 00:14:35.070 }, 00:14:35.070 { 00:14:35.070 "name": "BaseBdev2", 00:14:35.070 "uuid": "745f80a7-d4ac-57ff-879e-e41419acbb33", 00:14:35.070 "is_configured": true, 00:14:35.070 "data_offset": 0, 00:14:35.070 "data_size": 65536 
00:14:35.070 } 00:14:35.070 ] 00:14:35.070 }' 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.070 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.329 [2024-11-27 09:51:36.209439] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:35.329 [2024-11-27 09:51:36.210092] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:35.329 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.329 09:51:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:35.617 156.75 IOPS, 470.25 MiB/s [2024-11-27T09:51:36.750Z] [2024-11-27 09:51:36.665319] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:36.185 [2024-11-27 09:51:37.046445] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:36.185 [2024-11-27 09:51:37.157423] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:36.185 09:51:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:36.185 09:51:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.185 09:51:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.185 09:51:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.185 09:51:37 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.185 09:51:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.185 09:51:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.186 09:51:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.186 09:51:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.186 09:51:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.186 09:51:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.186 09:51:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.186 "name": "raid_bdev1", 00:14:36.186 "uuid": "5cd5c0cf-cc4c-4fd0-9eb4-053e69128146", 00:14:36.186 "strip_size_kb": 0, 00:14:36.186 "state": "online", 00:14:36.186 "raid_level": "raid1", 00:14:36.186 "superblock": false, 00:14:36.186 "num_base_bdevs": 2, 00:14:36.186 "num_base_bdevs_discovered": 2, 00:14:36.186 "num_base_bdevs_operational": 2, 00:14:36.186 "process": { 00:14:36.186 "type": "rebuild", 00:14:36.186 "target": "spare", 00:14:36.186 "progress": { 00:14:36.186 "blocks": 28672, 00:14:36.186 "percent": 43 00:14:36.186 } 00:14:36.186 }, 00:14:36.186 "base_bdevs_list": [ 00:14:36.186 { 00:14:36.186 "name": "spare", 00:14:36.186 "uuid": "988f21ad-6a05-5280-9588-f0ed05cb885b", 00:14:36.186 "is_configured": true, 00:14:36.186 "data_offset": 0, 00:14:36.186 "data_size": 65536 00:14:36.186 }, 00:14:36.186 { 00:14:36.186 "name": "BaseBdev2", 00:14:36.186 "uuid": "745f80a7-d4ac-57ff-879e-e41419acbb33", 00:14:36.186 "is_configured": true, 00:14:36.186 "data_offset": 0, 00:14:36.186 "data_size": 65536 00:14:36.186 } 00:14:36.186 ] 00:14:36.186 }' 00:14:36.186 09:51:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.452 
09:51:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.452 09:51:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.452 09:51:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.452 09:51:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:37.035 138.60 IOPS, 415.80 MiB/s [2024-11-27T09:51:38.168Z] [2024-11-27 09:51:37.859426] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:37.035 [2024-11-27 09:51:37.967650] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:37.295 09:51:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:37.295 09:51:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.295 09:51:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.295 09:51:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.295 09:51:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.295 09:51:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.295 09:51:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.295 09:51:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.295 09:51:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.295 09:51:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.555 09:51:38 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.555 09:51:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.555 "name": "raid_bdev1", 00:14:37.555 "uuid": "5cd5c0cf-cc4c-4fd0-9eb4-053e69128146", 00:14:37.555 "strip_size_kb": 0, 00:14:37.555 "state": "online", 00:14:37.555 "raid_level": "raid1", 00:14:37.555 "superblock": false, 00:14:37.555 "num_base_bdevs": 2, 00:14:37.555 "num_base_bdevs_discovered": 2, 00:14:37.555 "num_base_bdevs_operational": 2, 00:14:37.555 "process": { 00:14:37.555 "type": "rebuild", 00:14:37.555 "target": "spare", 00:14:37.555 "progress": { 00:14:37.555 "blocks": 47104, 00:14:37.555 "percent": 71 00:14:37.555 } 00:14:37.555 }, 00:14:37.555 "base_bdevs_list": [ 00:14:37.555 { 00:14:37.555 "name": "spare", 00:14:37.555 "uuid": "988f21ad-6a05-5280-9588-f0ed05cb885b", 00:14:37.555 "is_configured": true, 00:14:37.555 "data_offset": 0, 00:14:37.555 "data_size": 65536 00:14:37.555 }, 00:14:37.555 { 00:14:37.555 "name": "BaseBdev2", 00:14:37.555 "uuid": "745f80a7-d4ac-57ff-879e-e41419acbb33", 00:14:37.555 "is_configured": true, 00:14:37.555 "data_offset": 0, 00:14:37.555 "data_size": 65536 00:14:37.555 } 00:14:37.555 ] 00:14:37.555 }' 00:14:37.555 09:51:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.555 09:51:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.555 09:51:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.555 123.00 IOPS, 369.00 MiB/s [2024-11-27T09:51:38.688Z] 09:51:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.555 09:51:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:37.555 [2024-11-27 09:51:38.628192] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:38.124 
[2024-11-27 09:51:39.180556] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:38.690 109.71 IOPS, 329.14 MiB/s [2024-11-27T09:51:39.823Z] 09:51:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:38.690 09:51:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.690 09:51:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.690 09:51:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.690 09:51:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.690 09:51:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.690 09:51:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.690 09:51:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.690 09:51:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.690 09:51:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.690 09:51:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.690 09:51:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.690 "name": "raid_bdev1", 00:14:38.690 "uuid": "5cd5c0cf-cc4c-4fd0-9eb4-053e69128146", 00:14:38.690 "strip_size_kb": 0, 00:14:38.690 "state": "online", 00:14:38.690 "raid_level": "raid1", 00:14:38.690 "superblock": false, 00:14:38.690 "num_base_bdevs": 2, 00:14:38.690 "num_base_bdevs_discovered": 2, 00:14:38.690 "num_base_bdevs_operational": 2, 00:14:38.690 "process": { 00:14:38.690 "type": "rebuild", 00:14:38.690 "target": "spare", 00:14:38.690 "progress": { 
00:14:38.690 "blocks": 63488, 00:14:38.691 "percent": 96 00:14:38.691 } 00:14:38.691 }, 00:14:38.691 "base_bdevs_list": [ 00:14:38.691 { 00:14:38.691 "name": "spare", 00:14:38.691 "uuid": "988f21ad-6a05-5280-9588-f0ed05cb885b", 00:14:38.691 "is_configured": true, 00:14:38.691 "data_offset": 0, 00:14:38.691 "data_size": 65536 00:14:38.691 }, 00:14:38.691 { 00:14:38.691 "name": "BaseBdev2", 00:14:38.691 "uuid": "745f80a7-d4ac-57ff-879e-e41419acbb33", 00:14:38.691 "is_configured": true, 00:14:38.691 "data_offset": 0, 00:14:38.691 "data_size": 65536 00:14:38.691 } 00:14:38.691 ] 00:14:38.691 }' 00:14:38.691 09:51:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.691 [2024-11-27 09:51:39.607627] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:38.691 09:51:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:38.691 09:51:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.691 09:51:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.691 09:51:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:38.691 [2024-11-27 09:51:39.712674] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:38.691 [2024-11-27 09:51:39.716843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.630 100.25 IOPS, 300.75 MiB/s [2024-11-27T09:51:40.763Z] 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:39.630 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.630 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.630 09:51:40 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.630 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.630 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.630 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.630 09:51:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.630 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.630 09:51:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.630 09:51:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.630 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.630 "name": "raid_bdev1", 00:14:39.630 "uuid": "5cd5c0cf-cc4c-4fd0-9eb4-053e69128146", 00:14:39.630 "strip_size_kb": 0, 00:14:39.630 "state": "online", 00:14:39.630 "raid_level": "raid1", 00:14:39.630 "superblock": false, 00:14:39.630 "num_base_bdevs": 2, 00:14:39.630 "num_base_bdevs_discovered": 2, 00:14:39.630 "num_base_bdevs_operational": 2, 00:14:39.630 "base_bdevs_list": [ 00:14:39.630 { 00:14:39.630 "name": "spare", 00:14:39.630 "uuid": "988f21ad-6a05-5280-9588-f0ed05cb885b", 00:14:39.630 "is_configured": true, 00:14:39.630 "data_offset": 0, 00:14:39.630 "data_size": 65536 00:14:39.630 }, 00:14:39.630 { 00:14:39.630 "name": "BaseBdev2", 00:14:39.630 "uuid": "745f80a7-d4ac-57ff-879e-e41419acbb33", 00:14:39.630 "is_configured": true, 00:14:39.630 "data_offset": 0, 00:14:39.630 "data_size": 65536 00:14:39.630 } 00:14:39.630 ] 00:14:39.630 }' 00:14:39.630 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.890 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d 
]] 00:14:39.890 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.890 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:39.890 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:39.890 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:39.890 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.890 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:39.890 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:39.890 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.890 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.890 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.890 09:51:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.890 09:51:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.890 09:51:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.890 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.890 "name": "raid_bdev1", 00:14:39.890 "uuid": "5cd5c0cf-cc4c-4fd0-9eb4-053e69128146", 00:14:39.890 "strip_size_kb": 0, 00:14:39.890 "state": "online", 00:14:39.890 "raid_level": "raid1", 00:14:39.890 "superblock": false, 00:14:39.890 "num_base_bdevs": 2, 00:14:39.890 "num_base_bdevs_discovered": 2, 00:14:39.890 "num_base_bdevs_operational": 2, 00:14:39.890 "base_bdevs_list": [ 00:14:39.890 { 00:14:39.890 "name": "spare", 00:14:39.890 "uuid": 
"988f21ad-6a05-5280-9588-f0ed05cb885b", 00:14:39.890 "is_configured": true, 00:14:39.890 "data_offset": 0, 00:14:39.890 "data_size": 65536 00:14:39.890 }, 00:14:39.890 { 00:14:39.890 "name": "BaseBdev2", 00:14:39.890 "uuid": "745f80a7-d4ac-57ff-879e-e41419acbb33", 00:14:39.890 "is_configured": true, 00:14:39.890 "data_offset": 0, 00:14:39.890 "data_size": 65536 00:14:39.890 } 00:14:39.891 ] 00:14:39.891 }' 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.891 09:51:40 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.891 "name": "raid_bdev1", 00:14:39.891 "uuid": "5cd5c0cf-cc4c-4fd0-9eb4-053e69128146", 00:14:39.891 "strip_size_kb": 0, 00:14:39.891 "state": "online", 00:14:39.891 "raid_level": "raid1", 00:14:39.891 "superblock": false, 00:14:39.891 "num_base_bdevs": 2, 00:14:39.891 "num_base_bdevs_discovered": 2, 00:14:39.891 "num_base_bdevs_operational": 2, 00:14:39.891 "base_bdevs_list": [ 00:14:39.891 { 00:14:39.891 "name": "spare", 00:14:39.891 "uuid": "988f21ad-6a05-5280-9588-f0ed05cb885b", 00:14:39.891 "is_configured": true, 00:14:39.891 "data_offset": 0, 00:14:39.891 "data_size": 65536 00:14:39.891 }, 00:14:39.891 { 00:14:39.891 "name": "BaseBdev2", 00:14:39.891 "uuid": "745f80a7-d4ac-57ff-879e-e41419acbb33", 00:14:39.891 "is_configured": true, 00:14:39.891 "data_offset": 0, 00:14:39.891 "data_size": 65536 00:14:39.891 } 00:14:39.891 ] 00:14:39.891 }' 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.891 09:51:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:14:40.462 [2024-11-27 09:51:41.374970] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:40.462 [2024-11-27 09:51:41.375080] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.462 00:14:40.462 Latency(us) 00:14:40.462 [2024-11-27T09:51:41.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.462 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:40.462 raid_bdev1 : 8.96 93.76 281.29 0.00 0.00 15076.02 314.80 114015.47 00:14:40.462 [2024-11-27T09:51:41.595Z] =================================================================================================================== 00:14:40.462 [2024-11-27T09:51:41.595Z] Total : 93.76 281.29 0.00 0.00 15076.02 314.80 114015.47 00:14:40.462 [2024-11-27 09:51:41.480114] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.462 [2024-11-27 09:51:41.480259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.462 [2024-11-27 09:51:41.480366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to fr{ 00:14:40.462 "results": [ 00:14:40.462 { 00:14:40.462 "job": "raid_bdev1", 00:14:40.462 "core_mask": "0x1", 00:14:40.462 "workload": "randrw", 00:14:40.462 "percentage": 50, 00:14:40.462 "status": "finished", 00:14:40.462 "queue_depth": 2, 00:14:40.462 "io_size": 3145728, 00:14:40.462 "runtime": 8.958725, 00:14:40.462 "iops": 93.76334243991192, 00:14:40.462 "mibps": 281.29002731973577, 00:14:40.462 "io_failed": 0, 00:14:40.462 "io_timeout": 0, 00:14:40.462 "avg_latency_us": 15076.020461634436, 00:14:40.462 "min_latency_us": 314.80174672489085, 00:14:40.462 "max_latency_us": 114015.46899563319 00:14:40.462 } 00:14:40.462 ], 00:14:40.462 "core_count": 1 00:14:40.462 } 00:14:40.462 ee all in destruct 00:14:40.462 [2024-11-27 09:51:41.480443] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:40.462 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:40.462 09:51:41 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:40.722 /dev/nbd0 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:40.722 1+0 records in 00:14:40.722 1+0 records out 00:14:40.722 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395031 s, 10.4 MB/s 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:40.722 09:51:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:40.982 /dev/nbd1 00:14:40.982 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:40.982 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:40.982 09:51:42 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:40.982 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:40.982 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:40.982 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:40.982 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:40.982 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:40.982 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:40.982 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:40.982 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:40.982 1+0 records in 00:14:40.982 1+0 records out 00:14:40.982 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599465 s, 6.8 MB/s 00:14:40.982 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.982 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:40.982 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:40.982 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:40.982 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:40.982 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:40.982 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:40.982 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 
/dev/nbd0 /dev/nbd1 00:14:41.241 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:41.241 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:41.241 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:41.241 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:41.241 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:41.241 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:41.241 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:41.498 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:41.498 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:41.498 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:41.498 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:41.498 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:41.498 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:41.498 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:41.498 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:41.498 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:41.498 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:41.498 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 
00:14:41.498 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:41.498 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:41.498 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:41.498 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:41.758 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:41.758 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:41.758 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:41.758 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:41.758 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:41.758 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:41.758 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:41.758 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:41.758 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:41.758 09:51:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76731 00:14:41.758 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76731 ']' 00:14:41.758 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76731 00:14:41.758 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:41.758 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:41.758 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 76731 00:14:41.758 killing process with pid 76731 00:14:41.758 Received shutdown signal, test time was about 10.233966 seconds 00:14:41.758 00:14:41.758 Latency(us) 00:14:41.758 [2024-11-27T09:51:42.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.758 [2024-11-27T09:51:42.891Z] =================================================================================================================== 00:14:41.758 [2024-11-27T09:51:42.891Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:41.758 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:41.758 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:41.758 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76731' 00:14:41.758 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76731 00:14:41.758 [2024-11-27 09:51:42.731563] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:41.758 09:51:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76731 00:14:42.017 [2024-11-27 09:51:42.978606] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:43.399 00:14:43.399 real 0m13.566s 00:14:43.399 user 0m16.662s 00:14:43.399 sys 0m1.697s 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.399 ************************************ 00:14:43.399 END TEST raid_rebuild_test_io 00:14:43.399 ************************************ 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.399 09:51:44 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:14:43.399 
09:51:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:43.399 09:51:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.399 09:51:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.399 ************************************ 00:14:43.399 START TEST raid_rebuild_test_sb_io 00:14:43.399 ************************************ 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 
'BaseBdev2') 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77133 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77133 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77133 ']' 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:14:43.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.399 09:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.399 [2024-11-27 09:51:44.434312] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:14:43.399 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:43.399 Zero copy mechanism will not be used. 00:14:43.399 [2024-11-27 09:51:44.434599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77133 ] 00:14:43.659 [2024-11-27 09:51:44.612129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.659 [2024-11-27 09:51:44.748451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.918 [2024-11-27 09:51:44.982800] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.918 [2024-11-27 09:51:44.982890] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.178 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.178 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:44.178 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.178 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:44.178 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.178 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.439 BaseBdev1_malloc 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.439 [2024-11-27 09:51:45.326401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:44.439 [2024-11-27 09:51:45.326551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.439 [2024-11-27 09:51:45.326603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:44.439 [2024-11-27 09:51:45.326645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.439 [2024-11-27 09:51:45.329220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.439 [2024-11-27 09:51:45.329311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:44.439 BaseBdev1 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.439 BaseBdev2_malloc 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.439 [2024-11-27 09:51:45.388898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:44.439 [2024-11-27 09:51:45.388977] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.439 [2024-11-27 09:51:45.389018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:44.439 [2024-11-27 09:51:45.389033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.439 [2024-11-27 09:51:45.391471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.439 [2024-11-27 09:51:45.391516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:44.439 BaseBdev2 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.439 spare_malloc 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.439 spare_delay 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.439 [2024-11-27 09:51:45.473261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:44.439 [2024-11-27 09:51:45.473412] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.439 [2024-11-27 09:51:45.473461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:44.439 [2024-11-27 09:51:45.473504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.439 [2024-11-27 09:51:45.476057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.439 [2024-11-27 09:51:45.476141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:44.439 spare 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.439 [2024-11-27 09:51:45.485297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.439 [2024-11-27 09:51:45.487441] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:44.439 [2024-11-27 09:51:45.487700] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:44.439 [2024-11-27 09:51:45.487724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:44.439 [2024-11-27 09:51:45.487981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:44.439 [2024-11-27 09:51:45.488224] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:44.439 [2024-11-27 09:51:45.488236] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:44.439 [2024-11-27 09:51:45.488416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.439 "name": "raid_bdev1", 00:14:44.439 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:44.439 "strip_size_kb": 0, 00:14:44.439 "state": "online", 00:14:44.439 "raid_level": "raid1", 00:14:44.439 "superblock": true, 00:14:44.439 "num_base_bdevs": 2, 00:14:44.439 "num_base_bdevs_discovered": 2, 00:14:44.439 "num_base_bdevs_operational": 2, 00:14:44.439 "base_bdevs_list": [ 00:14:44.439 { 00:14:44.439 "name": "BaseBdev1", 00:14:44.439 "uuid": "c4786bb8-f2c6-5670-a267-ae1ac9fcc823", 00:14:44.439 "is_configured": true, 00:14:44.439 "data_offset": 2048, 00:14:44.439 "data_size": 63488 00:14:44.439 }, 00:14:44.439 { 00:14:44.439 "name": "BaseBdev2", 00:14:44.439 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:44.439 "is_configured": true, 00:14:44.439 "data_offset": 2048, 00:14:44.439 "data_size": 63488 00:14:44.439 } 00:14:44.439 ] 00:14:44.439 }' 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.439 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.009 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:45.009 09:51:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:45.009 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.009 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.009 [2024-11-27 09:51:45.952871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:45.009 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.009 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:45.009 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.009 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:45.009 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.009 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.009 09:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.009 [2024-11-27 09:51:46.036396] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.009 "name": 
"raid_bdev1", 00:14:45.009 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:45.009 "strip_size_kb": 0, 00:14:45.009 "state": "online", 00:14:45.009 "raid_level": "raid1", 00:14:45.009 "superblock": true, 00:14:45.009 "num_base_bdevs": 2, 00:14:45.009 "num_base_bdevs_discovered": 1, 00:14:45.009 "num_base_bdevs_operational": 1, 00:14:45.009 "base_bdevs_list": [ 00:14:45.009 { 00:14:45.009 "name": null, 00:14:45.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.009 "is_configured": false, 00:14:45.009 "data_offset": 0, 00:14:45.009 "data_size": 63488 00:14:45.009 }, 00:14:45.009 { 00:14:45.009 "name": "BaseBdev2", 00:14:45.009 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:45.009 "is_configured": true, 00:14:45.009 "data_offset": 2048, 00:14:45.009 "data_size": 63488 00:14:45.009 } 00:14:45.009 ] 00:14:45.009 }' 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.009 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.009 [2024-11-27 09:51:46.134030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:45.009 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:45.009 Zero copy mechanism will not be used. 00:14:45.009 Running I/O for 60 seconds... 
00:14:45.579 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:45.579 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.579 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.579 [2024-11-27 09:51:46.454417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:45.579 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.579 09:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:45.579 [2024-11-27 09:51:46.514529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:45.579 [2024-11-27 09:51:46.516875] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:45.579 [2024-11-27 09:51:46.629674] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:45.579 [2024-11-27 09:51:46.630538] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:45.839 [2024-11-27 09:51:46.847248] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:45.839 [2024-11-27 09:51:46.847852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:46.360 133.00 IOPS, 399.00 MiB/s [2024-11-27T09:51:47.493Z] [2024-11-27 09:51:47.321760] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:46.620 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.620 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:14:46.620 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.620 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.620 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.620 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.620 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.620 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.620 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.620 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.620 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.620 "name": "raid_bdev1", 00:14:46.620 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:46.620 "strip_size_kb": 0, 00:14:46.620 "state": "online", 00:14:46.620 "raid_level": "raid1", 00:14:46.620 "superblock": true, 00:14:46.620 "num_base_bdevs": 2, 00:14:46.620 "num_base_bdevs_discovered": 2, 00:14:46.620 "num_base_bdevs_operational": 2, 00:14:46.620 "process": { 00:14:46.620 "type": "rebuild", 00:14:46.620 "target": "spare", 00:14:46.620 "progress": { 00:14:46.620 "blocks": 10240, 00:14:46.620 "percent": 16 00:14:46.620 } 00:14:46.620 }, 00:14:46.620 "base_bdevs_list": [ 00:14:46.620 { 00:14:46.620 "name": "spare", 00:14:46.620 "uuid": "3a7f5914-3336-5910-be18-f7bf86cd17f8", 00:14:46.620 "is_configured": true, 00:14:46.620 "data_offset": 2048, 00:14:46.620 "data_size": 63488 00:14:46.620 }, 00:14:46.620 { 00:14:46.620 "name": "BaseBdev2", 00:14:46.620 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:46.620 "is_configured": true, 
00:14:46.620 "data_offset": 2048, 00:14:46.620 "data_size": 63488 00:14:46.620 } 00:14:46.620 ] 00:14:46.620 }' 00:14:46.620 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.620 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.620 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.620 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.620 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:46.620 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.620 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.620 [2024-11-27 09:51:47.643908] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.620 [2024-11-27 09:51:47.743668] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:46.880 [2024-11-27 09:51:47.751880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.880 [2024-11-27 09:51:47.751931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.880 [2024-11-27 09:51:47.751945] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:46.880 [2024-11-27 09:51:47.795306] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:46.880 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.880 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:46.880 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.880 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.880 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.880 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.880 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:46.880 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.880 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.880 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.880 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.880 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.880 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.880 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.880 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.880 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.880 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.880 "name": "raid_bdev1", 00:14:46.880 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:46.880 "strip_size_kb": 0, 00:14:46.880 "state": "online", 00:14:46.880 "raid_level": "raid1", 00:14:46.880 "superblock": true, 00:14:46.880 "num_base_bdevs": 2, 00:14:46.880 "num_base_bdevs_discovered": 1, 00:14:46.880 "num_base_bdevs_operational": 1, 00:14:46.880 "base_bdevs_list": [ 
00:14:46.880 { 00:14:46.880 "name": null, 00:14:46.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.880 "is_configured": false, 00:14:46.880 "data_offset": 0, 00:14:46.880 "data_size": 63488 00:14:46.880 }, 00:14:46.880 { 00:14:46.880 "name": "BaseBdev2", 00:14:46.880 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:46.881 "is_configured": true, 00:14:46.881 "data_offset": 2048, 00:14:46.881 "data_size": 63488 00:14:46.881 } 00:14:46.881 ] 00:14:46.881 }' 00:14:46.881 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.881 09:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.141 146.00 IOPS, 438.00 MiB/s [2024-11-27T09:51:48.274Z] 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:47.141 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.141 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:47.141 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:47.141 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.141 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.141 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.141 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.141 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.141 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.141 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.141 "name": 
"raid_bdev1", 00:14:47.141 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:47.141 "strip_size_kb": 0, 00:14:47.141 "state": "online", 00:14:47.141 "raid_level": "raid1", 00:14:47.141 "superblock": true, 00:14:47.141 "num_base_bdevs": 2, 00:14:47.141 "num_base_bdevs_discovered": 1, 00:14:47.141 "num_base_bdevs_operational": 1, 00:14:47.141 "base_bdevs_list": [ 00:14:47.141 { 00:14:47.141 "name": null, 00:14:47.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.141 "is_configured": false, 00:14:47.141 "data_offset": 0, 00:14:47.141 "data_size": 63488 00:14:47.141 }, 00:14:47.141 { 00:14:47.141 "name": "BaseBdev2", 00:14:47.141 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:47.141 "is_configured": true, 00:14:47.141 "data_offset": 2048, 00:14:47.141 "data_size": 63488 00:14:47.141 } 00:14:47.141 ] 00:14:47.141 }' 00:14:47.141 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.400 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:47.400 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.400 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:47.400 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:47.400 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.400 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.400 [2024-11-27 09:51:48.348492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.400 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.400 09:51:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:47.400 [2024-11-27 
09:51:48.400325] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:47.400 [2024-11-27 09:51:48.402608] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:47.400 [2024-11-27 09:51:48.505658] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:47.400 [2024-11-27 09:51:48.506607] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:47.660 [2024-11-27 09:51:48.728769] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:47.660 [2024-11-27 09:51:48.729481] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:48.232 [2024-11-27 09:51:49.079711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:48.232 [2024-11-27 09:51:49.080665] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:48.232 163.00 IOPS, 489.00 MiB/s [2024-11-27T09:51:49.365Z] [2024-11-27 09:51:49.284487] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:48.232 [2024-11-27 09:51:49.284947] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:48.493 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.493 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.493 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.493 09:51:49 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.493 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.493 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.493 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.493 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.493 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.493 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.493 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.493 "name": "raid_bdev1", 00:14:48.493 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:48.493 "strip_size_kb": 0, 00:14:48.493 "state": "online", 00:14:48.493 "raid_level": "raid1", 00:14:48.493 "superblock": true, 00:14:48.493 "num_base_bdevs": 2, 00:14:48.493 "num_base_bdevs_discovered": 2, 00:14:48.493 "num_base_bdevs_operational": 2, 00:14:48.493 "process": { 00:14:48.493 "type": "rebuild", 00:14:48.493 "target": "spare", 00:14:48.493 "progress": { 00:14:48.493 "blocks": 10240, 00:14:48.493 "percent": 16 00:14:48.493 } 00:14:48.493 }, 00:14:48.493 "base_bdevs_list": [ 00:14:48.493 { 00:14:48.493 "name": "spare", 00:14:48.493 "uuid": "3a7f5914-3336-5910-be18-f7bf86cd17f8", 00:14:48.493 "is_configured": true, 00:14:48.493 "data_offset": 2048, 00:14:48.493 "data_size": 63488 00:14:48.493 }, 00:14:48.493 { 00:14:48.493 "name": "BaseBdev2", 00:14:48.493 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:48.493 "is_configured": true, 00:14:48.493 "data_offset": 2048, 00:14:48.493 "data_size": 63488 00:14:48.493 } 00:14:48.494 ] 00:14:48.494 }' 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:48.494 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=424 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.494 [2024-11-27 09:51:49.536367] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.494 "name": "raid_bdev1", 00:14:48.494 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:48.494 "strip_size_kb": 0, 00:14:48.494 "state": "online", 00:14:48.494 "raid_level": "raid1", 00:14:48.494 "superblock": true, 00:14:48.494 "num_base_bdevs": 2, 00:14:48.494 "num_base_bdevs_discovered": 2, 00:14:48.494 "num_base_bdevs_operational": 2, 00:14:48.494 "process": { 00:14:48.494 "type": "rebuild", 00:14:48.494 "target": "spare", 00:14:48.494 "progress": { 00:14:48.494 "blocks": 14336, 00:14:48.494 "percent": 22 00:14:48.494 } 00:14:48.494 }, 00:14:48.494 "base_bdevs_list": [ 00:14:48.494 { 00:14:48.494 "name": "spare", 00:14:48.494 "uuid": "3a7f5914-3336-5910-be18-f7bf86cd17f8", 00:14:48.494 "is_configured": true, 00:14:48.494 "data_offset": 2048, 00:14:48.494 "data_size": 63488 00:14:48.494 }, 00:14:48.494 { 00:14:48.494 "name": "BaseBdev2", 00:14:48.494 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:48.494 "is_configured": true, 00:14:48.494 "data_offset": 2048, 00:14:48.494 "data_size": 63488 00:14:48.494 } 00:14:48.494 ] 00:14:48.494 }' 00:14:48.494 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.754 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.754 09:51:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.754 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.754 09:51:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:48.754 [2024-11-27 09:51:49.754077] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:49.273 143.75 IOPS, 431.25 MiB/s [2024-11-27T09:51:50.406Z] [2024-11-27 09:51:50.242353] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:49.273 [2024-11-27 09:51:50.242766] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:49.532 [2024-11-27 09:51:50.577437] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:49.532 09:51:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:49.532 09:51:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.532 09:51:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.532 09:51:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.532 09:51:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.532 09:51:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.804 09:51:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.804 09:51:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.804 09:51:50 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.804 09:51:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.804 09:51:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.804 09:51:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.804 "name": "raid_bdev1", 00:14:49.804 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:49.804 "strip_size_kb": 0, 00:14:49.804 "state": "online", 00:14:49.804 "raid_level": "raid1", 00:14:49.804 "superblock": true, 00:14:49.804 "num_base_bdevs": 2, 00:14:49.804 "num_base_bdevs_discovered": 2, 00:14:49.804 "num_base_bdevs_operational": 2, 00:14:49.804 "process": { 00:14:49.804 "type": "rebuild", 00:14:49.804 "target": "spare", 00:14:49.804 "progress": { 00:14:49.804 "blocks": 26624, 00:14:49.804 "percent": 41 00:14:49.804 } 00:14:49.804 }, 00:14:49.804 "base_bdevs_list": [ 00:14:49.804 { 00:14:49.804 "name": "spare", 00:14:49.804 "uuid": "3a7f5914-3336-5910-be18-f7bf86cd17f8", 00:14:49.804 "is_configured": true, 00:14:49.804 "data_offset": 2048, 00:14:49.804 "data_size": 63488 00:14:49.804 }, 00:14:49.804 { 00:14:49.804 "name": "BaseBdev2", 00:14:49.804 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:49.804 "is_configured": true, 00:14:49.804 "data_offset": 2048, 00:14:49.804 "data_size": 63488 00:14:49.804 } 00:14:49.804 ] 00:14:49.804 }' 00:14:49.804 09:51:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.804 09:51:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.804 09:51:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.804 09:51:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.804 09:51:50 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:50.094 [2024-11-27 09:51:51.025062] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:50.094 [2024-11-27 09:51:51.025496] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:50.368 128.80 IOPS, 386.40 MiB/s [2024-11-27T09:51:51.501Z] [2024-11-27 09:51:51.362674] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:50.948 [2024-11-27 09:51:51.805411] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:50.948 09:51:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:50.948 09:51:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.948 09:51:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.948 09:51:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.948 09:51:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.948 09:51:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.948 09:51:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.948 09:51:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.948 09:51:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.948 09:51:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.948 09:51:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:50.948 09:51:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.948 "name": "raid_bdev1", 00:14:50.948 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:50.948 "strip_size_kb": 0, 00:14:50.948 "state": "online", 00:14:50.948 "raid_level": "raid1", 00:14:50.948 "superblock": true, 00:14:50.948 "num_base_bdevs": 2, 00:14:50.948 "num_base_bdevs_discovered": 2, 00:14:50.948 "num_base_bdevs_operational": 2, 00:14:50.948 "process": { 00:14:50.948 "type": "rebuild", 00:14:50.948 "target": "spare", 00:14:50.948 "progress": { 00:14:50.948 "blocks": 47104, 00:14:50.948 "percent": 74 00:14:50.948 } 00:14:50.948 }, 00:14:50.948 "base_bdevs_list": [ 00:14:50.948 { 00:14:50.948 "name": "spare", 00:14:50.948 "uuid": "3a7f5914-3336-5910-be18-f7bf86cd17f8", 00:14:50.948 "is_configured": true, 00:14:50.948 "data_offset": 2048, 00:14:50.948 "data_size": 63488 00:14:50.948 }, 00:14:50.948 { 00:14:50.948 "name": "BaseBdev2", 00:14:50.948 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:50.948 "is_configured": true, 00:14:50.948 "data_offset": 2048, 00:14:50.948 "data_size": 63488 00:14:50.948 } 00:14:50.948 ] 00:14:50.948 }' 00:14:50.948 09:51:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.948 09:51:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.948 09:51:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.948 09:51:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.948 09:51:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:51.208 114.00 IOPS, 342.00 MiB/s [2024-11-27T09:51:52.341Z] [2024-11-27 09:51:52.150373] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:51.777 [2024-11-27 
09:51:52.696357] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:51.777 [2024-11-27 09:51:52.801190] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:51.777 [2024-11-27 09:51:52.805110] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.037 09:51:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:52.037 09:51:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.037 09:51:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.037 09:51:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.037 09:51:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.037 09:51:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.038 09:51:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.038 09:51:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.038 09:51:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.038 09:51:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.038 09:51:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.038 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.038 "name": "raid_bdev1", 00:14:52.038 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:52.038 "strip_size_kb": 0, 00:14:52.038 "state": "online", 00:14:52.038 "raid_level": "raid1", 00:14:52.038 "superblock": true, 00:14:52.038 "num_base_bdevs": 2, 00:14:52.038 
"num_base_bdevs_discovered": 2, 00:14:52.038 "num_base_bdevs_operational": 2, 00:14:52.038 "base_bdevs_list": [ 00:14:52.038 { 00:14:52.038 "name": "spare", 00:14:52.038 "uuid": "3a7f5914-3336-5910-be18-f7bf86cd17f8", 00:14:52.038 "is_configured": true, 00:14:52.038 "data_offset": 2048, 00:14:52.038 "data_size": 63488 00:14:52.038 }, 00:14:52.038 { 00:14:52.038 "name": "BaseBdev2", 00:14:52.038 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:52.038 "is_configured": true, 00:14:52.038 "data_offset": 2048, 00:14:52.038 "data_size": 63488 00:14:52.038 } 00:14:52.038 ] 00:14:52.038 }' 00:14:52.038 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.038 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:52.038 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.038 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:52.038 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:52.038 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.038 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.038 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.038 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.038 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.038 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.038 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.038 09:51:53 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.038 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.038 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.038 103.14 IOPS, 309.43 MiB/s [2024-11-27T09:51:53.171Z] 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.038 "name": "raid_bdev1", 00:14:52.038 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:52.038 "strip_size_kb": 0, 00:14:52.038 "state": "online", 00:14:52.038 "raid_level": "raid1", 00:14:52.038 "superblock": true, 00:14:52.038 "num_base_bdevs": 2, 00:14:52.038 "num_base_bdevs_discovered": 2, 00:14:52.038 "num_base_bdevs_operational": 2, 00:14:52.038 "base_bdevs_list": [ 00:14:52.038 { 00:14:52.038 "name": "spare", 00:14:52.038 "uuid": "3a7f5914-3336-5910-be18-f7bf86cd17f8", 00:14:52.038 "is_configured": true, 00:14:52.038 "data_offset": 2048, 00:14:52.038 "data_size": 63488 00:14:52.038 }, 00:14:52.038 { 00:14:52.038 "name": "BaseBdev2", 00:14:52.038 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:52.038 "is_configured": true, 00:14:52.038 "data_offset": 2048, 00:14:52.038 "data_size": 63488 00:14:52.038 } 00:14:52.038 ] 00:14:52.038 }' 00:14:52.038 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.298 "name": "raid_bdev1", 00:14:52.298 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:52.298 "strip_size_kb": 0, 00:14:52.298 "state": "online", 00:14:52.298 "raid_level": "raid1", 00:14:52.298 "superblock": true, 00:14:52.298 "num_base_bdevs": 2, 00:14:52.298 "num_base_bdevs_discovered": 2, 00:14:52.298 "num_base_bdevs_operational": 2, 00:14:52.298 "base_bdevs_list": [ 
00:14:52.298 { 00:14:52.298 "name": "spare", 00:14:52.298 "uuid": "3a7f5914-3336-5910-be18-f7bf86cd17f8", 00:14:52.298 "is_configured": true, 00:14:52.298 "data_offset": 2048, 00:14:52.298 "data_size": 63488 00:14:52.298 }, 00:14:52.298 { 00:14:52.298 "name": "BaseBdev2", 00:14:52.298 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:52.298 "is_configured": true, 00:14:52.298 "data_offset": 2048, 00:14:52.298 "data_size": 63488 00:14:52.298 } 00:14:52.298 ] 00:14:52.298 }' 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.298 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.558 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:52.558 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.558 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.558 [2024-11-27 09:51:53.635720] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:52.558 [2024-11-27 09:51:53.635819] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:52.558 00:14:52.558 Latency(us) 00:14:52.558 [2024-11-27T09:51:53.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.558 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:52.558 raid_bdev1 : 7.55 98.42 295.25 0.00 0.00 13422.32 305.86 113557.58 00:14:52.558 [2024-11-27T09:51:53.691Z] =================================================================================================================== 00:14:52.558 [2024-11-27T09:51:53.691Z] Total : 98.42 295.25 0.00 0.00 13422.32 305.86 113557.58 00:14:52.817 [2024-11-27 09:51:53.691258] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.818 [2024-11-27 
09:51:53.691409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.818 [2024-11-27 09:51:53.691528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.818 [2024-11-27 09:51:53.691593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:52.818 { 00:14:52.818 "results": [ 00:14:52.818 { 00:14:52.818 "job": "raid_bdev1", 00:14:52.818 "core_mask": "0x1", 00:14:52.818 "workload": "randrw", 00:14:52.818 "percentage": 50, 00:14:52.818 "status": "finished", 00:14:52.818 "queue_depth": 2, 00:14:52.818 "io_size": 3145728, 00:14:52.818 "runtime": 7.54963, 00:14:52.818 "iops": 98.41541903378047, 00:14:52.818 "mibps": 295.24625710134137, 00:14:52.818 "io_failed": 0, 00:14:52.818 "io_timeout": 0, 00:14:52.818 "avg_latency_us": 13422.317106972208, 00:14:52.818 "min_latency_us": 305.8585152838428, 00:14:52.818 "max_latency_us": 113557.57554585153 00:14:52.818 } 00:14:52.818 ], 00:14:52.818 "core_count": 1 00:14:52.818 } 00:14:52.818 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.818 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.818 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.818 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:52.818 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.818 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.818 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:52.818 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:52.818 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:52.818 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:52.818 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:52.818 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:52.818 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:52.818 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:52.818 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:52.818 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:52.818 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:52.818 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:52.818 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:53.078 /dev/nbd0 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:53.078 
09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:53.078 1+0 records in 00:14:53.078 1+0 records out 00:14:53.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283588 s, 14.4 MB/s 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:53.078 09:51:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:53.078 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:53.078 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:53.078 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:53.078 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:53.078 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:53.078 /dev/nbd1 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:53.338 1+0 records in 00:14:53.338 1+0 records out 00:14:53.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049596 s, 8.3 MB/s 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:53.338 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:53.598 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:53.598 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:53.598 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:53.598 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:53.598 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:53.598 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:53.598 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:53.598 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:53.598 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:53.598 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:53.598 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:53.598 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:53.598 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:53.598 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:53.598 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:53.858 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:53.858 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:53.858 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:53.858 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:53.858 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:53.858 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:53.858 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:53.858 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:53.858 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:53.858 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:53.858 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.858 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.858 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.858 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:53.858 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.858 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.858 [2024-11-27 09:51:54.896231] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:53.858 [2024-11-27 09:51:54.896366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.858 [2024-11-27 09:51:54.896442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:53.858 [2024-11-27 09:51:54.896486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.858 [2024-11-27 09:51:54.899131] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.858 [2024-11-27 09:51:54.899234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:53.858 [2024-11-27 09:51:54.899409] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:53.858 [2024-11-27 09:51:54.899506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:53.858 [2024-11-27 09:51:54.899708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:53.858 spare 00:14:53.858 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.858 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:53.858 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.858 09:51:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.119 [2024-11-27 09:51:54.999675] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:54.119 [2024-11-27 09:51:54.999744] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:54.119 [2024-11-27 09:51:55.000114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:14:54.119 [2024-11-27 09:51:55.000346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:54.119 [2024-11-27 09:51:55.000413] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:54.119 [2024-11-27 09:51:55.000674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.119 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.119 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:54.119 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.119 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.119 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.119 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.119 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:54.119 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.119 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.119 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.119 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.119 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.119 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.119 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.119 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.119 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.119 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.119 "name": "raid_bdev1", 00:14:54.119 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:54.119 "strip_size_kb": 0, 00:14:54.119 "state": "online", 00:14:54.119 "raid_level": "raid1", 00:14:54.119 "superblock": true, 00:14:54.119 "num_base_bdevs": 2, 00:14:54.119 
"num_base_bdevs_discovered": 2, 00:14:54.119 "num_base_bdevs_operational": 2, 00:14:54.119 "base_bdevs_list": [ 00:14:54.119 { 00:14:54.119 "name": "spare", 00:14:54.119 "uuid": "3a7f5914-3336-5910-be18-f7bf86cd17f8", 00:14:54.119 "is_configured": true, 00:14:54.119 "data_offset": 2048, 00:14:54.119 "data_size": 63488 00:14:54.119 }, 00:14:54.119 { 00:14:54.119 "name": "BaseBdev2", 00:14:54.119 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:54.119 "is_configured": true, 00:14:54.119 "data_offset": 2048, 00:14:54.119 "data_size": 63488 00:14:54.119 } 00:14:54.119 ] 00:14:54.119 }' 00:14:54.119 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.119 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.378 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:54.378 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.378 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:54.378 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:54.378 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.378 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.378 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.378 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.378 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.638 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.638 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.638 "name": "raid_bdev1", 00:14:54.638 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:54.638 "strip_size_kb": 0, 00:14:54.638 "state": "online", 00:14:54.638 "raid_level": "raid1", 00:14:54.638 "superblock": true, 00:14:54.638 "num_base_bdevs": 2, 00:14:54.638 "num_base_bdevs_discovered": 2, 00:14:54.638 "num_base_bdevs_operational": 2, 00:14:54.638 "base_bdevs_list": [ 00:14:54.638 { 00:14:54.638 "name": "spare", 00:14:54.638 "uuid": "3a7f5914-3336-5910-be18-f7bf86cd17f8", 00:14:54.638 "is_configured": true, 00:14:54.638 "data_offset": 2048, 00:14:54.638 "data_size": 63488 00:14:54.638 }, 00:14:54.638 { 00:14:54.638 "name": "BaseBdev2", 00:14:54.638 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:54.638 "is_configured": true, 00:14:54.638 "data_offset": 2048, 00:14:54.638 "data_size": 63488 00:14:54.638 } 00:14:54.638 ] 00:14:54.638 }' 00:14:54.638 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.638 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:54.638 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.638 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:54.638 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:54.638 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.638 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.638 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.638 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.638 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.638 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:54.638 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.638 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.638 [2024-11-27 09:51:55.699632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:54.638 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.638 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:54.638 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.638 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.639 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.639 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.639 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:54.639 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.639 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.639 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.639 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.639 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.639 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.639 
09:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.639 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.639 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.639 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.639 "name": "raid_bdev1", 00:14:54.639 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:54.639 "strip_size_kb": 0, 00:14:54.639 "state": "online", 00:14:54.639 "raid_level": "raid1", 00:14:54.639 "superblock": true, 00:14:54.639 "num_base_bdevs": 2, 00:14:54.639 "num_base_bdevs_discovered": 1, 00:14:54.639 "num_base_bdevs_operational": 1, 00:14:54.639 "base_bdevs_list": [ 00:14:54.639 { 00:14:54.639 "name": null, 00:14:54.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.639 "is_configured": false, 00:14:54.639 "data_offset": 0, 00:14:54.639 "data_size": 63488 00:14:54.639 }, 00:14:54.639 { 00:14:54.639 "name": "BaseBdev2", 00:14:54.639 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:54.639 "is_configured": true, 00:14:54.639 "data_offset": 2048, 00:14:54.639 "data_size": 63488 00:14:54.639 } 00:14:54.639 ] 00:14:54.639 }' 00:14:54.639 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.639 09:51:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.209 09:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:55.209 09:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.209 09:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.209 [2024-11-27 09:51:56.123116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:55.209 [2024-11-27 09:51:56.123456] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:55.209 [2024-11-27 09:51:56.123478] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:55.209 [2024-11-27 09:51:56.123536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:55.209 [2024-11-27 09:51:56.142148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:14:55.209 09:51:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.209 09:51:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:55.209 [2024-11-27 09:51:56.144471] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:56.148 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.148 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.148 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.148 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.148 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.148 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.148 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.148 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.148 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.148 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:56.148 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.148 "name": "raid_bdev1", 00:14:56.148 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:56.148 "strip_size_kb": 0, 00:14:56.148 "state": "online", 00:14:56.148 "raid_level": "raid1", 00:14:56.148 "superblock": true, 00:14:56.148 "num_base_bdevs": 2, 00:14:56.148 "num_base_bdevs_discovered": 2, 00:14:56.148 "num_base_bdevs_operational": 2, 00:14:56.148 "process": { 00:14:56.148 "type": "rebuild", 00:14:56.148 "target": "spare", 00:14:56.148 "progress": { 00:14:56.148 "blocks": 20480, 00:14:56.148 "percent": 32 00:14:56.148 } 00:14:56.148 }, 00:14:56.148 "base_bdevs_list": [ 00:14:56.148 { 00:14:56.148 "name": "spare", 00:14:56.148 "uuid": "3a7f5914-3336-5910-be18-f7bf86cd17f8", 00:14:56.148 "is_configured": true, 00:14:56.148 "data_offset": 2048, 00:14:56.148 "data_size": 63488 00:14:56.148 }, 00:14:56.148 { 00:14:56.148 "name": "BaseBdev2", 00:14:56.148 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:56.148 "is_configured": true, 00:14:56.148 "data_offset": 2048, 00:14:56.148 "data_size": 63488 00:14:56.148 } 00:14:56.148 ] 00:14:56.148 }' 00:14:56.148 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.148 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.148 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.409 
[2024-11-27 09:51:57.309261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:56.409 [2024-11-27 09:51:57.354026] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:56.409 [2024-11-27 09:51:57.354114] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.409 [2024-11-27 09:51:57.354136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:56.409 [2024-11-27 09:51:57.354146] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.409 09:51:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.409 "name": "raid_bdev1", 00:14:56.409 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:56.409 "strip_size_kb": 0, 00:14:56.409 "state": "online", 00:14:56.409 "raid_level": "raid1", 00:14:56.409 "superblock": true, 00:14:56.409 "num_base_bdevs": 2, 00:14:56.409 "num_base_bdevs_discovered": 1, 00:14:56.409 "num_base_bdevs_operational": 1, 00:14:56.409 "base_bdevs_list": [ 00:14:56.409 { 00:14:56.409 "name": null, 00:14:56.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.409 "is_configured": false, 00:14:56.409 "data_offset": 0, 00:14:56.409 "data_size": 63488 00:14:56.409 }, 00:14:56.409 { 00:14:56.409 "name": "BaseBdev2", 00:14:56.409 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:56.409 "is_configured": true, 00:14:56.409 "data_offset": 2048, 00:14:56.409 "data_size": 63488 00:14:56.409 } 00:14:56.409 ] 00:14:56.409 }' 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.409 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.986 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:56.986 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.986 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.986 [2024-11-27 09:51:57.835464] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:56.986 [2024-11-27 09:51:57.835623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.986 [2024-11-27 09:51:57.835697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:56.986 [2024-11-27 09:51:57.835734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.986 [2024-11-27 09:51:57.836389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.986 [2024-11-27 09:51:57.836490] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:56.986 [2024-11-27 09:51:57.836669] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:56.987 [2024-11-27 09:51:57.836722] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:56.987 [2024-11-27 09:51:57.836778] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:56.987 [2024-11-27 09:51:57.836841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:56.987 [2024-11-27 09:51:57.855505] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:14:56.987 spare 00:14:56.987 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.987 [2024-11-27 09:51:57.857809] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:56.987 09:51:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:57.923 09:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.923 09:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.923 09:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.923 09:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.923 09:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.923 09:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.923 09:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.923 09:51:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.923 09:51:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.923 09:51:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.923 09:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.923 "name": "raid_bdev1", 00:14:57.923 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:57.924 "strip_size_kb": 0, 00:14:57.924 
"state": "online", 00:14:57.924 "raid_level": "raid1", 00:14:57.924 "superblock": true, 00:14:57.924 "num_base_bdevs": 2, 00:14:57.924 "num_base_bdevs_discovered": 2, 00:14:57.924 "num_base_bdevs_operational": 2, 00:14:57.924 "process": { 00:14:57.924 "type": "rebuild", 00:14:57.924 "target": "spare", 00:14:57.924 "progress": { 00:14:57.924 "blocks": 20480, 00:14:57.924 "percent": 32 00:14:57.924 } 00:14:57.924 }, 00:14:57.924 "base_bdevs_list": [ 00:14:57.924 { 00:14:57.924 "name": "spare", 00:14:57.924 "uuid": "3a7f5914-3336-5910-be18-f7bf86cd17f8", 00:14:57.924 "is_configured": true, 00:14:57.924 "data_offset": 2048, 00:14:57.924 "data_size": 63488 00:14:57.924 }, 00:14:57.924 { 00:14:57.924 "name": "BaseBdev2", 00:14:57.924 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:57.924 "is_configured": true, 00:14:57.924 "data_offset": 2048, 00:14:57.924 "data_size": 63488 00:14:57.924 } 00:14:57.924 ] 00:14:57.924 }' 00:14:57.924 09:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.924 09:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:57.924 09:51:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.924 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.924 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:57.924 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.924 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.924 [2024-11-27 09:51:59.017206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:58.184 [2024-11-27 09:51:59.067740] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:14:58.184 [2024-11-27 09:51:59.067931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.184 [2024-11-27 09:51:59.067977] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:58.184 [2024-11-27 09:51:59.068024] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:58.184 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.184 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:58.184 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.184 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.184 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.184 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.184 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:58.184 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.184 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.184 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.184 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.184 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.184 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.184 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.184 09:51:59 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.184 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.184 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.184 "name": "raid_bdev1", 00:14:58.184 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:58.184 "strip_size_kb": 0, 00:14:58.184 "state": "online", 00:14:58.184 "raid_level": "raid1", 00:14:58.184 "superblock": true, 00:14:58.184 "num_base_bdevs": 2, 00:14:58.184 "num_base_bdevs_discovered": 1, 00:14:58.184 "num_base_bdevs_operational": 1, 00:14:58.184 "base_bdevs_list": [ 00:14:58.184 { 00:14:58.184 "name": null, 00:14:58.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.184 "is_configured": false, 00:14:58.184 "data_offset": 0, 00:14:58.184 "data_size": 63488 00:14:58.184 }, 00:14:58.184 { 00:14:58.184 "name": "BaseBdev2", 00:14:58.184 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:58.184 "is_configured": true, 00:14:58.184 "data_offset": 2048, 00:14:58.184 "data_size": 63488 00:14:58.184 } 00:14:58.184 ] 00:14:58.184 }' 00:14:58.184 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.184 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.444 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:58.444 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.444 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:58.444 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:58.444 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.444 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.444 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.444 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.444 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.444 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.704 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.704 "name": "raid_bdev1", 00:14:58.704 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:58.704 "strip_size_kb": 0, 00:14:58.704 "state": "online", 00:14:58.704 "raid_level": "raid1", 00:14:58.704 "superblock": true, 00:14:58.704 "num_base_bdevs": 2, 00:14:58.704 "num_base_bdevs_discovered": 1, 00:14:58.704 "num_base_bdevs_operational": 1, 00:14:58.704 "base_bdevs_list": [ 00:14:58.704 { 00:14:58.704 "name": null, 00:14:58.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.704 "is_configured": false, 00:14:58.704 "data_offset": 0, 00:14:58.704 "data_size": 63488 00:14:58.704 }, 00:14:58.704 { 00:14:58.704 "name": "BaseBdev2", 00:14:58.704 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:58.704 "is_configured": true, 00:14:58.704 "data_offset": 2048, 00:14:58.704 "data_size": 63488 00:14:58.704 } 00:14:58.704 ] 00:14:58.704 }' 00:14:58.704 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.704 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:58.704 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.704 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:58.704 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:58.704 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.704 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.704 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.704 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:58.704 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.704 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.704 [2024-11-27 09:51:59.697705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:58.704 [2024-11-27 09:51:59.697843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.704 [2024-11-27 09:51:59.697904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:58.704 [2024-11-27 09:51:59.697956] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.704 [2024-11-27 09:51:59.698568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.704 [2024-11-27 09:51:59.698641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:58.704 [2024-11-27 09:51:59.698782] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:58.704 [2024-11-27 09:51:59.698838] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:58.704 [2024-11-27 09:51:59.698884] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:58.704 [2024-11-27 09:51:59.698950] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:58.704 BaseBdev1 00:14:58.704 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.704 09:51:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:59.641 09:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:59.641 09:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.641 09:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.641 09:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.641 09:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.641 09:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:59.641 09:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.641 09:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.641 09:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.641 09:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.641 09:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.641 09:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.641 09:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.641 09:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.641 09:52:00 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.641 09:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.641 "name": "raid_bdev1", 00:14:59.641 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:14:59.641 "strip_size_kb": 0, 00:14:59.641 "state": "online", 00:14:59.641 "raid_level": "raid1", 00:14:59.641 "superblock": true, 00:14:59.641 "num_base_bdevs": 2, 00:14:59.641 "num_base_bdevs_discovered": 1, 00:14:59.641 "num_base_bdevs_operational": 1, 00:14:59.641 "base_bdevs_list": [ 00:14:59.641 { 00:14:59.641 "name": null, 00:14:59.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.641 "is_configured": false, 00:14:59.641 "data_offset": 0, 00:14:59.641 "data_size": 63488 00:14:59.641 }, 00:14:59.641 { 00:14:59.641 "name": "BaseBdev2", 00:14:59.641 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:14:59.641 "is_configured": true, 00:14:59.642 "data_offset": 2048, 00:14:59.642 "data_size": 63488 00:14:59.642 } 00:14:59.642 ] 00:14:59.642 }' 00:14:59.642 09:52:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.642 09:52:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.209 "name": "raid_bdev1", 00:15:00.209 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:15:00.209 "strip_size_kb": 0, 00:15:00.209 "state": "online", 00:15:00.209 "raid_level": "raid1", 00:15:00.209 "superblock": true, 00:15:00.209 "num_base_bdevs": 2, 00:15:00.209 "num_base_bdevs_discovered": 1, 00:15:00.209 "num_base_bdevs_operational": 1, 00:15:00.209 "base_bdevs_list": [ 00:15:00.209 { 00:15:00.209 "name": null, 00:15:00.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.209 "is_configured": false, 00:15:00.209 "data_offset": 0, 00:15:00.209 "data_size": 63488 00:15:00.209 }, 00:15:00.209 { 00:15:00.209 "name": "BaseBdev2", 00:15:00.209 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:15:00.209 "is_configured": true, 00:15:00.209 "data_offset": 2048, 00:15:00.209 "data_size": 63488 00:15:00.209 } 00:15:00.209 ] 00:15:00.209 }' 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.209 [2024-11-27 09:52:01.271682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.209 [2024-11-27 09:52:01.271981] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:00.209 [2024-11-27 09:52:01.272073] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:00.209 request: 00:15:00.209 { 00:15:00.209 "base_bdev": "BaseBdev1", 00:15:00.209 "raid_bdev": "raid_bdev1", 00:15:00.209 "method": "bdev_raid_add_base_bdev", 00:15:00.209 "req_id": 1 00:15:00.209 } 00:15:00.209 Got JSON-RPC error response 00:15:00.209 response: 00:15:00.209 { 00:15:00.209 "code": -22, 00:15:00.209 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:00.209 } 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:00.209 09:52:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.591 "name": "raid_bdev1", 00:15:01.591 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:15:01.591 "strip_size_kb": 0, 00:15:01.591 "state": "online", 00:15:01.591 "raid_level": "raid1", 00:15:01.591 "superblock": true, 00:15:01.591 "num_base_bdevs": 2, 00:15:01.591 "num_base_bdevs_discovered": 1, 00:15:01.591 "num_base_bdevs_operational": 1, 00:15:01.591 "base_bdevs_list": [ 00:15:01.591 { 00:15:01.591 "name": null, 00:15:01.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.591 "is_configured": false, 00:15:01.591 "data_offset": 0, 00:15:01.591 "data_size": 63488 00:15:01.591 }, 00:15:01.591 { 00:15:01.591 "name": "BaseBdev2", 00:15:01.591 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:15:01.591 "is_configured": true, 00:15:01.591 "data_offset": 2048, 00:15:01.591 "data_size": 63488 00:15:01.591 } 00:15:01.591 ] 00:15:01.591 }' 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.591 09:52:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.591 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.851 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.851 "name": "raid_bdev1", 00:15:01.851 "uuid": "4efbcdf5-21bb-46da-8ec5-c40e298fe24c", 00:15:01.851 "strip_size_kb": 0, 00:15:01.851 "state": "online", 00:15:01.851 "raid_level": "raid1", 00:15:01.851 "superblock": true, 00:15:01.851 "num_base_bdevs": 2, 00:15:01.851 "num_base_bdevs_discovered": 1, 00:15:01.851 "num_base_bdevs_operational": 1, 00:15:01.851 "base_bdevs_list": [ 00:15:01.851 { 00:15:01.851 "name": null, 00:15:01.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.851 "is_configured": false, 00:15:01.851 "data_offset": 0, 00:15:01.851 "data_size": 63488 00:15:01.851 }, 00:15:01.851 { 00:15:01.851 "name": "BaseBdev2", 00:15:01.851 "uuid": "c4fbedaa-976f-5696-8878-e923b2bfc7cb", 00:15:01.851 "is_configured": true, 00:15:01.851 "data_offset": 2048, 00:15:01.851 "data_size": 63488 00:15:01.851 } 00:15:01.851 ] 00:15:01.851 }' 00:15:01.851 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.851 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:01.851 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.851 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:01.851 09:52:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77133 00:15:01.851 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77133 ']' 00:15:01.851 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77133 00:15:01.851 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:01.851 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.851 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77133 00:15:01.851 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:01.851 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:01.851 killing process with pid 77133 00:15:01.851 Received shutdown signal, test time was about 16.770689 seconds 00:15:01.851 00:15:01.851 Latency(us) 00:15:01.851 [2024-11-27T09:52:02.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.851 [2024-11-27T09:52:02.984Z] =================================================================================================================== 00:15:01.851 [2024-11-27T09:52:02.984Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:01.851 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77133' 00:15:01.851 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77133 00:15:01.851 [2024-11-27 09:52:02.874883] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:01.851 09:52:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77133 00:15:01.851 [2024-11-27 09:52:02.875071] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.851 [2024-11-27 09:52:02.875144] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:01.851 [2024-11-27 09:52:02.875156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:02.111 [2024-11-27 09:52:03.121958] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:03.494 ************************************ 00:15:03.494 END TEST raid_rebuild_test_sb_io 00:15:03.494 ************************************ 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:03.494 00:15:03.494 real 0m20.065s 00:15:03.494 user 0m25.887s 00:15:03.494 sys 0m2.324s 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.494 09:52:04 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:03.494 09:52:04 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:15:03.494 09:52:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:03.494 09:52:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.494 09:52:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:03.494 ************************************ 00:15:03.494 START TEST raid_rebuild_test 00:15:03.494 ************************************ 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:03.494 09:52:04 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77819 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77819 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77819 ']' 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:03.494 09:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.495 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:03.495 Zero copy mechanism will not be used. 
00:15:03.495 [2024-11-27 09:52:04.567146] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:15:03.495 [2024-11-27 09:52:04.567284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77819 ] 00:15:03.754 [2024-11-27 09:52:04.746402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.754 [2024-11-27 09:52:04.880707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.014 [2024-11-27 09:52:05.111221] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.014 [2024-11-27 09:52:05.111275] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.273 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:04.273 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:04.273 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:04.273 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:04.273 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.273 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.533 BaseBdev1_malloc 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.533 
[2024-11-27 09:52:05.447104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:04.533 [2024-11-27 09:52:05.447248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.533 [2024-11-27 09:52:05.447323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:04.533 [2024-11-27 09:52:05.447364] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.533 [2024-11-27 09:52:05.449870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.533 [2024-11-27 09:52:05.449979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:04.533 BaseBdev1 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.533 BaseBdev2_malloc 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.533 [2024-11-27 09:52:05.499873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:04.533 [2024-11-27 09:52:05.500012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:04.533 [2024-11-27 09:52:05.500081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:04.533 [2024-11-27 09:52:05.500122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.533 [2024-11-27 09:52:05.502512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.533 [2024-11-27 09:52:05.502595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:04.533 BaseBdev2 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.533 BaseBdev3_malloc 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.533 [2024-11-27 09:52:05.585605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:04.533 [2024-11-27 09:52:05.585738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.533 [2024-11-27 09:52:05.585801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:04.533 [2024-11-27 09:52:05.585844] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.533 [2024-11-27 09:52:05.588254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.533 [2024-11-27 09:52:05.588358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:04.533 BaseBdev3 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.533 BaseBdev4_malloc 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.533 [2024-11-27 09:52:05.642966] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:04.533 [2024-11-27 09:52:05.643111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.533 [2024-11-27 09:52:05.643157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:04.533 [2024-11-27 09:52:05.643216] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.533 [2024-11-27 09:52:05.645607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.533 [2024-11-27 09:52:05.645695] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:04.533 BaseBdev4 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.533 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.792 spare_malloc 00:15:04.792 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.792 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:04.792 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.792 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.792 spare_delay 00:15:04.792 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.793 [2024-11-27 09:52:05.716542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:04.793 [2024-11-27 09:52:05.716655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.793 [2024-11-27 09:52:05.716696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:04.793 [2024-11-27 09:52:05.716747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.793 [2024-11-27 
09:52:05.719195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.793 [2024-11-27 09:52:05.719279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:04.793 spare 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.793 [2024-11-27 09:52:05.728573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.793 [2024-11-27 09:52:05.730792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.793 [2024-11-27 09:52:05.730909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:04.793 [2024-11-27 09:52:05.731028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:04.793 [2024-11-27 09:52:05.731161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:04.793 [2024-11-27 09:52:05.731216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:04.793 [2024-11-27 09:52:05.731532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:04.793 [2024-11-27 09:52:05.731795] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:04.793 [2024-11-27 09:52:05.731852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:04.793 [2024-11-27 09:52:05.732077] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.793 "name": "raid_bdev1", 00:15:04.793 "uuid": "2abc9167-4dbb-4de3-a03d-2899c9b74c33", 00:15:04.793 "strip_size_kb": 0, 00:15:04.793 "state": "online", 00:15:04.793 "raid_level": 
"raid1", 00:15:04.793 "superblock": false, 00:15:04.793 "num_base_bdevs": 4, 00:15:04.793 "num_base_bdevs_discovered": 4, 00:15:04.793 "num_base_bdevs_operational": 4, 00:15:04.793 "base_bdevs_list": [ 00:15:04.793 { 00:15:04.793 "name": "BaseBdev1", 00:15:04.793 "uuid": "3d0970d2-cc55-51e5-83ec-449c2fd2fa98", 00:15:04.793 "is_configured": true, 00:15:04.793 "data_offset": 0, 00:15:04.793 "data_size": 65536 00:15:04.793 }, 00:15:04.793 { 00:15:04.793 "name": "BaseBdev2", 00:15:04.793 "uuid": "ccfc8d63-f8e9-5975-84dd-48aa133c95eb", 00:15:04.793 "is_configured": true, 00:15:04.793 "data_offset": 0, 00:15:04.793 "data_size": 65536 00:15:04.793 }, 00:15:04.793 { 00:15:04.793 "name": "BaseBdev3", 00:15:04.793 "uuid": "f60f59e6-4b5a-584c-8407-33d2c35f3205", 00:15:04.793 "is_configured": true, 00:15:04.793 "data_offset": 0, 00:15:04.793 "data_size": 65536 00:15:04.793 }, 00:15:04.793 { 00:15:04.793 "name": "BaseBdev4", 00:15:04.793 "uuid": "be79e48b-0feb-56ac-a3d0-102fb6d11aea", 00:15:04.793 "is_configured": true, 00:15:04.793 "data_offset": 0, 00:15:04.793 "data_size": 65536 00:15:04.793 } 00:15:04.793 ] 00:15:04.793 }' 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.793 09:52:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.059 09:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:05.059 09:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:05.059 09:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.059 09:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.059 [2024-11-27 09:52:06.184557] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.337 09:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.337 09:52:06 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:05.337 09:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.337 09:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.337 09:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.337 09:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:05.337 09:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.337 09:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:05.337 09:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:05.337 09:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:05.337 09:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:05.337 09:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:05.337 09:52:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.337 09:52:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:05.337 09:52:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:05.337 09:52:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:05.337 09:52:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:05.337 09:52:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:05.337 09:52:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:05.337 09:52:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:05.337 09:52:06 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:05.596 [2024-11-27 09:52:06.471748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:05.596 /dev/nbd0 00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:05.596 1+0 records in 00:15:05.596 1+0 records out 00:15:05.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450949 s, 9.1 MB/s 00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:05.596 09:52:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:12.187 65536+0 records in 00:15:12.187 65536+0 records out 00:15:12.187 33554432 bytes (34 MB, 32 MiB) copied, 5.56103 s, 6.0 MB/s 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:12.187 [2024-11-27 09:52:12.315291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:12.187 
09:52:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.187 [2024-11-27 09:52:12.332836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.187 09:52:12 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.187 "name": "raid_bdev1", 00:15:12.187 "uuid": "2abc9167-4dbb-4de3-a03d-2899c9b74c33", 00:15:12.187 "strip_size_kb": 0, 00:15:12.187 "state": "online", 00:15:12.187 "raid_level": "raid1", 00:15:12.187 "superblock": false, 00:15:12.187 "num_base_bdevs": 4, 00:15:12.187 "num_base_bdevs_discovered": 3, 00:15:12.187 "num_base_bdevs_operational": 3, 00:15:12.187 "base_bdevs_list": [ 00:15:12.187 { 00:15:12.187 "name": null, 00:15:12.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.187 "is_configured": false, 00:15:12.187 "data_offset": 0, 00:15:12.187 "data_size": 65536 00:15:12.187 }, 00:15:12.187 { 00:15:12.187 "name": "BaseBdev2", 00:15:12.187 "uuid": "ccfc8d63-f8e9-5975-84dd-48aa133c95eb", 00:15:12.187 "is_configured": true, 00:15:12.187 "data_offset": 0, 00:15:12.187 "data_size": 65536 00:15:12.187 }, 00:15:12.187 { 00:15:12.187 "name": "BaseBdev3", 00:15:12.187 "uuid": "f60f59e6-4b5a-584c-8407-33d2c35f3205", 00:15:12.187 "is_configured": true, 00:15:12.187 "data_offset": 0, 00:15:12.187 "data_size": 65536 00:15:12.187 }, 00:15:12.187 { 00:15:12.187 "name": "BaseBdev4", 00:15:12.187 "uuid": "be79e48b-0feb-56ac-a3d0-102fb6d11aea", 00:15:12.187 
"is_configured": true, 00:15:12.187 "data_offset": 0, 00:15:12.187 "data_size": 65536 00:15:12.187 } 00:15:12.187 ] 00:15:12.187 }' 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.187 [2024-11-27 09:52:12.808181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:12.187 [2024-11-27 09:52:12.823473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.187 09:52:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:12.187 [2024-11-27 09:52:12.825738] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:12.756 09:52:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.756 09:52:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.756 09:52:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.756 09:52:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.756 09:52:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.756 09:52:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.756 09:52:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:12.756 09:52:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.756 09:52:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.756 09:52:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.756 09:52:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.756 "name": "raid_bdev1", 00:15:12.756 "uuid": "2abc9167-4dbb-4de3-a03d-2899c9b74c33", 00:15:12.756 "strip_size_kb": 0, 00:15:12.756 "state": "online", 00:15:12.756 "raid_level": "raid1", 00:15:12.756 "superblock": false, 00:15:12.756 "num_base_bdevs": 4, 00:15:12.756 "num_base_bdevs_discovered": 4, 00:15:12.756 "num_base_bdevs_operational": 4, 00:15:12.756 "process": { 00:15:12.756 "type": "rebuild", 00:15:12.756 "target": "spare", 00:15:12.756 "progress": { 00:15:12.756 "blocks": 20480, 00:15:12.756 "percent": 31 00:15:12.756 } 00:15:12.756 }, 00:15:12.756 "base_bdevs_list": [ 00:15:12.756 { 00:15:12.756 "name": "spare", 00:15:12.756 "uuid": "7398967f-81f4-562e-a8cc-89f25ff76ef6", 00:15:12.756 "is_configured": true, 00:15:12.756 "data_offset": 0, 00:15:12.756 "data_size": 65536 00:15:12.756 }, 00:15:12.756 { 00:15:12.756 "name": "BaseBdev2", 00:15:12.756 "uuid": "ccfc8d63-f8e9-5975-84dd-48aa133c95eb", 00:15:12.756 "is_configured": true, 00:15:12.756 "data_offset": 0, 00:15:12.756 "data_size": 65536 00:15:12.756 }, 00:15:12.756 { 00:15:12.756 "name": "BaseBdev3", 00:15:12.756 "uuid": "f60f59e6-4b5a-584c-8407-33d2c35f3205", 00:15:12.756 "is_configured": true, 00:15:12.756 "data_offset": 0, 00:15:12.756 "data_size": 65536 00:15:12.756 }, 00:15:12.756 { 00:15:12.756 "name": "BaseBdev4", 00:15:12.756 "uuid": "be79e48b-0feb-56ac-a3d0-102fb6d11aea", 00:15:12.756 "is_configured": true, 00:15:12.756 "data_offset": 0, 00:15:12.756 "data_size": 65536 00:15:12.756 } 00:15:12.756 ] 00:15:12.756 }' 00:15:12.756 09:52:13 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.015 09:52:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:13.015 09:52:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.015 09:52:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:13.015 09:52:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:13.015 09:52:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.015 09:52:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.015 [2024-11-27 09:52:13.985447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.015 [2024-11-27 09:52:14.035088] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:13.015 [2024-11-27 09:52:14.035239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.015 [2024-11-27 09:52:14.035284] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:13.015 [2024-11-27 09:52:14.035331] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:13.015 09:52:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.015 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:13.015 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.015 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.015 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:13.015 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:13.015 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.016 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.016 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.016 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.016 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.016 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.016 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.016 09:52:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.016 09:52:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.016 09:52:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.016 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.016 "name": "raid_bdev1", 00:15:13.016 "uuid": "2abc9167-4dbb-4de3-a03d-2899c9b74c33", 00:15:13.016 "strip_size_kb": 0, 00:15:13.016 "state": "online", 00:15:13.016 "raid_level": "raid1", 00:15:13.016 "superblock": false, 00:15:13.016 "num_base_bdevs": 4, 00:15:13.016 "num_base_bdevs_discovered": 3, 00:15:13.016 "num_base_bdevs_operational": 3, 00:15:13.016 "base_bdevs_list": [ 00:15:13.016 { 00:15:13.016 "name": null, 00:15:13.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.016 "is_configured": false, 00:15:13.016 "data_offset": 0, 00:15:13.016 "data_size": 65536 00:15:13.016 }, 00:15:13.016 { 00:15:13.016 "name": "BaseBdev2", 00:15:13.016 "uuid": "ccfc8d63-f8e9-5975-84dd-48aa133c95eb", 00:15:13.016 "is_configured": true, 00:15:13.016 "data_offset": 0, 00:15:13.016 "data_size": 65536 00:15:13.016 }, 00:15:13.016 { 
00:15:13.016 "name": "BaseBdev3", 00:15:13.016 "uuid": "f60f59e6-4b5a-584c-8407-33d2c35f3205", 00:15:13.016 "is_configured": true, 00:15:13.016 "data_offset": 0, 00:15:13.016 "data_size": 65536 00:15:13.016 }, 00:15:13.016 { 00:15:13.016 "name": "BaseBdev4", 00:15:13.016 "uuid": "be79e48b-0feb-56ac-a3d0-102fb6d11aea", 00:15:13.016 "is_configured": true, 00:15:13.016 "data_offset": 0, 00:15:13.016 "data_size": 65536 00:15:13.016 } 00:15:13.016 ] 00:15:13.016 }' 00:15:13.016 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.016 09:52:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.583 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:13.584 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.584 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:13.584 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:13.584 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.584 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.584 09:52:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.584 09:52:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.584 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.584 09:52:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.584 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.584 "name": "raid_bdev1", 00:15:13.584 "uuid": "2abc9167-4dbb-4de3-a03d-2899c9b74c33", 00:15:13.584 "strip_size_kb": 0, 00:15:13.584 "state": "online", 
00:15:13.584 "raid_level": "raid1", 00:15:13.584 "superblock": false, 00:15:13.584 "num_base_bdevs": 4, 00:15:13.584 "num_base_bdevs_discovered": 3, 00:15:13.584 "num_base_bdevs_operational": 3, 00:15:13.584 "base_bdevs_list": [ 00:15:13.584 { 00:15:13.584 "name": null, 00:15:13.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.584 "is_configured": false, 00:15:13.584 "data_offset": 0, 00:15:13.584 "data_size": 65536 00:15:13.584 }, 00:15:13.584 { 00:15:13.584 "name": "BaseBdev2", 00:15:13.584 "uuid": "ccfc8d63-f8e9-5975-84dd-48aa133c95eb", 00:15:13.584 "is_configured": true, 00:15:13.584 "data_offset": 0, 00:15:13.584 "data_size": 65536 00:15:13.584 }, 00:15:13.584 { 00:15:13.584 "name": "BaseBdev3", 00:15:13.584 "uuid": "f60f59e6-4b5a-584c-8407-33d2c35f3205", 00:15:13.584 "is_configured": true, 00:15:13.584 "data_offset": 0, 00:15:13.584 "data_size": 65536 00:15:13.584 }, 00:15:13.584 { 00:15:13.584 "name": "BaseBdev4", 00:15:13.584 "uuid": "be79e48b-0feb-56ac-a3d0-102fb6d11aea", 00:15:13.584 "is_configured": true, 00:15:13.584 "data_offset": 0, 00:15:13.584 "data_size": 65536 00:15:13.584 } 00:15:13.584 ] 00:15:13.584 }' 00:15:13.584 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.584 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:13.584 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.584 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:13.584 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:13.584 09:52:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.584 09:52:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.584 [2024-11-27 09:52:14.602512] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:13.584 [2024-11-27 09:52:14.617621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:15:13.584 09:52:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.584 09:52:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:13.584 [2024-11-27 09:52:14.619827] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:14.522 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.522 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.522 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.522 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.522 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.522 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.522 09:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.522 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.522 09:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.783 "name": "raid_bdev1", 00:15:14.783 "uuid": "2abc9167-4dbb-4de3-a03d-2899c9b74c33", 00:15:14.783 "strip_size_kb": 0, 00:15:14.783 "state": "online", 00:15:14.783 "raid_level": "raid1", 00:15:14.783 "superblock": false, 00:15:14.783 "num_base_bdevs": 4, 00:15:14.783 
"num_base_bdevs_discovered": 4, 00:15:14.783 "num_base_bdevs_operational": 4, 00:15:14.783 "process": { 00:15:14.783 "type": "rebuild", 00:15:14.783 "target": "spare", 00:15:14.783 "progress": { 00:15:14.783 "blocks": 20480, 00:15:14.783 "percent": 31 00:15:14.783 } 00:15:14.783 }, 00:15:14.783 "base_bdevs_list": [ 00:15:14.783 { 00:15:14.783 "name": "spare", 00:15:14.783 "uuid": "7398967f-81f4-562e-a8cc-89f25ff76ef6", 00:15:14.783 "is_configured": true, 00:15:14.783 "data_offset": 0, 00:15:14.783 "data_size": 65536 00:15:14.783 }, 00:15:14.783 { 00:15:14.783 "name": "BaseBdev2", 00:15:14.783 "uuid": "ccfc8d63-f8e9-5975-84dd-48aa133c95eb", 00:15:14.783 "is_configured": true, 00:15:14.783 "data_offset": 0, 00:15:14.783 "data_size": 65536 00:15:14.783 }, 00:15:14.783 { 00:15:14.783 "name": "BaseBdev3", 00:15:14.783 "uuid": "f60f59e6-4b5a-584c-8407-33d2c35f3205", 00:15:14.783 "is_configured": true, 00:15:14.783 "data_offset": 0, 00:15:14.783 "data_size": 65536 00:15:14.783 }, 00:15:14.783 { 00:15:14.783 "name": "BaseBdev4", 00:15:14.783 "uuid": "be79e48b-0feb-56ac-a3d0-102fb6d11aea", 00:15:14.783 "is_configured": true, 00:15:14.783 "data_offset": 0, 00:15:14.783 "data_size": 65536 00:15:14.783 } 00:15:14.783 ] 00:15:14.783 }' 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.783 [2024-11-27 09:52:15.775307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:14.783 [2024-11-27 09:52:15.828961] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.783 09:52:15 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.783 "name": "raid_bdev1", 00:15:14.783 "uuid": "2abc9167-4dbb-4de3-a03d-2899c9b74c33", 00:15:14.783 "strip_size_kb": 0, 00:15:14.783 "state": "online", 00:15:14.783 "raid_level": "raid1", 00:15:14.783 "superblock": false, 00:15:14.783 "num_base_bdevs": 4, 00:15:14.783 "num_base_bdevs_discovered": 3, 00:15:14.783 "num_base_bdevs_operational": 3, 00:15:14.783 "process": { 00:15:14.783 "type": "rebuild", 00:15:14.783 "target": "spare", 00:15:14.783 "progress": { 00:15:14.783 "blocks": 24576, 00:15:14.783 "percent": 37 00:15:14.783 } 00:15:14.783 }, 00:15:14.783 "base_bdevs_list": [ 00:15:14.783 { 00:15:14.783 "name": "spare", 00:15:14.783 "uuid": "7398967f-81f4-562e-a8cc-89f25ff76ef6", 00:15:14.783 "is_configured": true, 00:15:14.783 "data_offset": 0, 00:15:14.783 "data_size": 65536 00:15:14.783 }, 00:15:14.783 { 00:15:14.783 "name": null, 00:15:14.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.783 "is_configured": false, 00:15:14.783 "data_offset": 0, 00:15:14.783 "data_size": 65536 00:15:14.783 }, 00:15:14.783 { 00:15:14.783 "name": "BaseBdev3", 00:15:14.783 "uuid": "f60f59e6-4b5a-584c-8407-33d2c35f3205", 00:15:14.783 "is_configured": true, 00:15:14.783 "data_offset": 0, 00:15:14.783 "data_size": 65536 00:15:14.783 }, 00:15:14.783 { 00:15:14.783 "name": "BaseBdev4", 00:15:14.783 "uuid": "be79e48b-0feb-56ac-a3d0-102fb6d11aea", 00:15:14.783 "is_configured": true, 00:15:14.783 "data_offset": 0, 00:15:14.783 "data_size": 65536 00:15:14.783 } 00:15:14.783 ] 00:15:14.783 }' 00:15:14.783 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.044 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.044 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:15:15.044 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.044 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=450 00:15:15.044 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.044 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.044 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.044 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.044 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.044 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.044 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.044 09:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.044 09:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.044 09:52:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.044 09:52:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.044 09:52:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.044 "name": "raid_bdev1", 00:15:15.044 "uuid": "2abc9167-4dbb-4de3-a03d-2899c9b74c33", 00:15:15.044 "strip_size_kb": 0, 00:15:15.044 "state": "online", 00:15:15.044 "raid_level": "raid1", 00:15:15.044 "superblock": false, 00:15:15.044 "num_base_bdevs": 4, 00:15:15.044 "num_base_bdevs_discovered": 3, 00:15:15.044 "num_base_bdevs_operational": 3, 00:15:15.044 "process": { 00:15:15.044 "type": "rebuild", 00:15:15.044 "target": "spare", 00:15:15.044 "progress": { 
00:15:15.044 "blocks": 26624, 00:15:15.044 "percent": 40 00:15:15.044 } 00:15:15.044 }, 00:15:15.044 "base_bdevs_list": [ 00:15:15.044 { 00:15:15.044 "name": "spare", 00:15:15.044 "uuid": "7398967f-81f4-562e-a8cc-89f25ff76ef6", 00:15:15.044 "is_configured": true, 00:15:15.044 "data_offset": 0, 00:15:15.044 "data_size": 65536 00:15:15.044 }, 00:15:15.044 { 00:15:15.044 "name": null, 00:15:15.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.044 "is_configured": false, 00:15:15.044 "data_offset": 0, 00:15:15.044 "data_size": 65536 00:15:15.044 }, 00:15:15.044 { 00:15:15.044 "name": "BaseBdev3", 00:15:15.044 "uuid": "f60f59e6-4b5a-584c-8407-33d2c35f3205", 00:15:15.044 "is_configured": true, 00:15:15.044 "data_offset": 0, 00:15:15.044 "data_size": 65536 00:15:15.044 }, 00:15:15.044 { 00:15:15.044 "name": "BaseBdev4", 00:15:15.044 "uuid": "be79e48b-0feb-56ac-a3d0-102fb6d11aea", 00:15:15.044 "is_configured": true, 00:15:15.044 "data_offset": 0, 00:15:15.044 "data_size": 65536 00:15:15.044 } 00:15:15.044 ] 00:15:15.044 }' 00:15:15.044 09:52:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.044 09:52:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.044 09:52:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.044 09:52:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.044 09:52:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.427 09:52:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.427 09:52:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.427 09:52:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.427 09:52:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:15:16.427 09:52:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:16.427 09:52:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.427 09:52:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.427 09:52:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.427 09:52:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.427 09:52:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.427 09:52:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.427 09:52:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.427 "name": "raid_bdev1", 00:15:16.427 "uuid": "2abc9167-4dbb-4de3-a03d-2899c9b74c33", 00:15:16.427 "strip_size_kb": 0, 00:15:16.427 "state": "online", 00:15:16.427 "raid_level": "raid1", 00:15:16.427 "superblock": false, 00:15:16.427 "num_base_bdevs": 4, 00:15:16.427 "num_base_bdevs_discovered": 3, 00:15:16.427 "num_base_bdevs_operational": 3, 00:15:16.427 "process": { 00:15:16.427 "type": "rebuild", 00:15:16.427 "target": "spare", 00:15:16.427 "progress": { 00:15:16.427 "blocks": 49152, 00:15:16.427 "percent": 75 00:15:16.427 } 00:15:16.427 }, 00:15:16.427 "base_bdevs_list": [ 00:15:16.427 { 00:15:16.427 "name": "spare", 00:15:16.427 "uuid": "7398967f-81f4-562e-a8cc-89f25ff76ef6", 00:15:16.427 "is_configured": true, 00:15:16.427 "data_offset": 0, 00:15:16.427 "data_size": 65536 00:15:16.427 }, 00:15:16.427 { 00:15:16.427 "name": null, 00:15:16.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.427 "is_configured": false, 00:15:16.427 "data_offset": 0, 00:15:16.427 "data_size": 65536 00:15:16.427 }, 00:15:16.427 { 00:15:16.427 "name": "BaseBdev3", 00:15:16.427 "uuid": 
"f60f59e6-4b5a-584c-8407-33d2c35f3205", 00:15:16.427 "is_configured": true, 00:15:16.427 "data_offset": 0, 00:15:16.427 "data_size": 65536 00:15:16.427 }, 00:15:16.427 { 00:15:16.427 "name": "BaseBdev4", 00:15:16.427 "uuid": "be79e48b-0feb-56ac-a3d0-102fb6d11aea", 00:15:16.427 "is_configured": true, 00:15:16.427 "data_offset": 0, 00:15:16.427 "data_size": 65536 00:15:16.427 } 00:15:16.427 ] 00:15:16.427 }' 00:15:16.427 09:52:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.428 09:52:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.428 09:52:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.428 09:52:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.428 09:52:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:16.998 [2024-11-27 09:52:17.844198] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:16.998 [2024-11-27 09:52:17.844399] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:16.998 [2024-11-27 09:52:17.844514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.258 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:17.258 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.258 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.258 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.258 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.258 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.258 09:52:18 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.258 09:52:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.258 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.258 09:52:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.258 09:52:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.258 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.258 "name": "raid_bdev1", 00:15:17.258 "uuid": "2abc9167-4dbb-4de3-a03d-2899c9b74c33", 00:15:17.258 "strip_size_kb": 0, 00:15:17.258 "state": "online", 00:15:17.258 "raid_level": "raid1", 00:15:17.258 "superblock": false, 00:15:17.258 "num_base_bdevs": 4, 00:15:17.258 "num_base_bdevs_discovered": 3, 00:15:17.258 "num_base_bdevs_operational": 3, 00:15:17.258 "base_bdevs_list": [ 00:15:17.258 { 00:15:17.258 "name": "spare", 00:15:17.258 "uuid": "7398967f-81f4-562e-a8cc-89f25ff76ef6", 00:15:17.258 "is_configured": true, 00:15:17.258 "data_offset": 0, 00:15:17.258 "data_size": 65536 00:15:17.258 }, 00:15:17.258 { 00:15:17.258 "name": null, 00:15:17.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.258 "is_configured": false, 00:15:17.258 "data_offset": 0, 00:15:17.258 "data_size": 65536 00:15:17.258 }, 00:15:17.258 { 00:15:17.258 "name": "BaseBdev3", 00:15:17.259 "uuid": "f60f59e6-4b5a-584c-8407-33d2c35f3205", 00:15:17.259 "is_configured": true, 00:15:17.259 "data_offset": 0, 00:15:17.259 "data_size": 65536 00:15:17.259 }, 00:15:17.259 { 00:15:17.259 "name": "BaseBdev4", 00:15:17.259 "uuid": "be79e48b-0feb-56ac-a3d0-102fb6d11aea", 00:15:17.259 "is_configured": true, 00:15:17.259 "data_offset": 0, 00:15:17.259 "data_size": 65536 00:15:17.259 } 00:15:17.259 ] 00:15:17.259 }' 00:15:17.259 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:15:17.259 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:17.259 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.519 "name": "raid_bdev1", 00:15:17.519 "uuid": "2abc9167-4dbb-4de3-a03d-2899c9b74c33", 00:15:17.519 "strip_size_kb": 0, 00:15:17.519 "state": "online", 00:15:17.519 "raid_level": "raid1", 00:15:17.519 "superblock": false, 00:15:17.519 "num_base_bdevs": 4, 00:15:17.519 "num_base_bdevs_discovered": 3, 00:15:17.519 "num_base_bdevs_operational": 3, 00:15:17.519 
"base_bdevs_list": [ 00:15:17.519 { 00:15:17.519 "name": "spare", 00:15:17.519 "uuid": "7398967f-81f4-562e-a8cc-89f25ff76ef6", 00:15:17.519 "is_configured": true, 00:15:17.519 "data_offset": 0, 00:15:17.519 "data_size": 65536 00:15:17.519 }, 00:15:17.519 { 00:15:17.519 "name": null, 00:15:17.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.519 "is_configured": false, 00:15:17.519 "data_offset": 0, 00:15:17.519 "data_size": 65536 00:15:17.519 }, 00:15:17.519 { 00:15:17.519 "name": "BaseBdev3", 00:15:17.519 "uuid": "f60f59e6-4b5a-584c-8407-33d2c35f3205", 00:15:17.519 "is_configured": true, 00:15:17.519 "data_offset": 0, 00:15:17.519 "data_size": 65536 00:15:17.519 }, 00:15:17.519 { 00:15:17.519 "name": "BaseBdev4", 00:15:17.519 "uuid": "be79e48b-0feb-56ac-a3d0-102fb6d11aea", 00:15:17.519 "is_configured": true, 00:15:17.519 "data_offset": 0, 00:15:17.519 "data_size": 65536 00:15:17.519 } 00:15:17.519 ] 00:15:17.519 }' 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.519 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.520 09:52:18 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.520 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.520 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.520 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.520 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.520 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.520 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.520 09:52:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.520 09:52:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.520 09:52:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.520 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.520 "name": "raid_bdev1", 00:15:17.520 "uuid": "2abc9167-4dbb-4de3-a03d-2899c9b74c33", 00:15:17.520 "strip_size_kb": 0, 00:15:17.520 "state": "online", 00:15:17.520 "raid_level": "raid1", 00:15:17.520 "superblock": false, 00:15:17.520 "num_base_bdevs": 4, 00:15:17.520 "num_base_bdevs_discovered": 3, 00:15:17.520 "num_base_bdevs_operational": 3, 00:15:17.520 "base_bdevs_list": [ 00:15:17.520 { 00:15:17.520 "name": "spare", 00:15:17.520 "uuid": "7398967f-81f4-562e-a8cc-89f25ff76ef6", 00:15:17.520 "is_configured": true, 00:15:17.520 "data_offset": 0, 00:15:17.520 "data_size": 65536 00:15:17.520 }, 00:15:17.520 { 00:15:17.520 "name": null, 00:15:17.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.520 "is_configured": false, 00:15:17.520 "data_offset": 0, 00:15:17.520 "data_size": 65536 00:15:17.520 }, 00:15:17.520 { 00:15:17.520 "name": "BaseBdev3", 00:15:17.520 "uuid": 
"f60f59e6-4b5a-584c-8407-33d2c35f3205", 00:15:17.520 "is_configured": true, 00:15:17.520 "data_offset": 0, 00:15:17.520 "data_size": 65536 00:15:17.520 }, 00:15:17.520 { 00:15:17.520 "name": "BaseBdev4", 00:15:17.520 "uuid": "be79e48b-0feb-56ac-a3d0-102fb6d11aea", 00:15:17.520 "is_configured": true, 00:15:17.520 "data_offset": 0, 00:15:17.520 "data_size": 65536 00:15:17.520 } 00:15:17.520 ] 00:15:17.520 }' 00:15:17.520 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.520 09:52:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.090 [2024-11-27 09:52:18.954411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:18.090 [2024-11-27 09:52:18.954507] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.090 [2024-11-27 09:52:18.954643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.090 [2024-11-27 09:52:18.954778] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.090 [2024-11-27 09:52:18.954794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:18.090 09:52:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:18.090 /dev/nbd0 00:15:18.090 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:18.350 09:52:19 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.350 1+0 records in 00:15:18.350 1+0 records out 00:15:18.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436971 s, 9.4 MB/s 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:18.350 /dev/nbd1 00:15:18.350 
09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:18.350 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:18.350 1+0 records in 00:15:18.350 1+0 records out 00:15:18.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471358 s, 8.7 MB/s 00:15:18.610 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.610 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:18.610 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:18.610 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:18.610 09:52:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:18.610 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:15:18.610 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:18.610 09:52:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:18.610 09:52:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:18.610 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:18.610 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:18.610 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:18.610 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:18.610 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.610 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:18.870 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:18.870 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:18.870 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:18.870 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:18.870 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:18.870 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:18.870 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:18.870 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:18.870 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:18.870 09:52:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:19.131 09:52:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:19.131 09:52:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:19.131 09:52:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:19.131 09:52:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:19.131 09:52:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:19.131 09:52:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:19.131 09:52:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:19.131 09:52:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:19.131 09:52:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:19.131 09:52:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77819 00:15:19.131 09:52:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77819 ']' 00:15:19.131 09:52:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77819 00:15:19.131 09:52:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:19.131 09:52:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:19.131 09:52:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77819 00:15:19.131 09:52:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:19.131 09:52:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:19.131 09:52:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77819' 00:15:19.131 killing process with pid 77819 00:15:19.131 
09:52:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77819 00:15:19.131 Received shutdown signal, test time was about 60.000000 seconds 00:15:19.131 00:15:19.131 Latency(us) 00:15:19.131 [2024-11-27T09:52:20.264Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.131 [2024-11-27T09:52:20.264Z] =================================================================================================================== 00:15:19.131 [2024-11-27T09:52:20.264Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:19.131 [2024-11-27 09:52:20.157581] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:19.131 09:52:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77819 00:15:19.702 [2024-11-27 09:52:20.672077] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:21.085 09:52:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:21.085 00:15:21.085 real 0m17.409s 00:15:21.085 user 0m19.475s 00:15:21.085 sys 0m3.197s 00:15:21.085 ************************************ 00:15:21.085 END TEST raid_rebuild_test 00:15:21.085 ************************************ 00:15:21.085 09:52:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.085 09:52:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.085 09:52:21 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:15:21.085 09:52:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:21.085 09:52:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.085 09:52:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:21.085 ************************************ 00:15:21.086 START TEST raid_rebuild_test_sb 00:15:21.086 ************************************ 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78260 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78260 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78260 ']' 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.086 09:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.086 [2024-11-27 09:52:22.065132] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:15:21.086 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:21.086 Zero copy mechanism will not be used. 00:15:21.086 [2024-11-27 09:52:22.065355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78260 ] 00:15:21.346 [2024-11-27 09:52:22.246538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.347 [2024-11-27 09:52:22.381591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.607 [2024-11-27 09:52:22.616430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.607 [2024-11-27 09:52:22.616503] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.867 09:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:21.867 09:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:21.867 09:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:21.867 09:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:21.867 09:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:21.867 09:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.867 BaseBdev1_malloc 00:15:21.867 09:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.867 09:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:21.867 09:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.867 09:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.867 [2024-11-27 09:52:22.930755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:21.867 [2024-11-27 09:52:22.930908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.867 [2024-11-27 09:52:22.930958] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:21.867 [2024-11-27 09:52:22.931009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.867 [2024-11-27 09:52:22.933510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.867 [2024-11-27 09:52:22.933600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:21.867 BaseBdev1 00:15:21.867 09:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.867 09:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:21.867 09:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:21.867 09:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.867 09:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.867 BaseBdev2_malloc 00:15:21.867 09:52:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.867 09:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:21.867 09:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.867 09:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.867 [2024-11-27 09:52:22.992152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:21.867 [2024-11-27 09:52:22.992290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.867 [2024-11-27 09:52:22.992341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:21.867 [2024-11-27 09:52:22.992384] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.867 [2024-11-27 09:52:22.994953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.867 [2024-11-27 09:52:22.995053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:22.127 BaseBdev2 00:15:22.127 09:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.127 09:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:22.127 09:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:22.127 09:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.127 09:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.127 BaseBdev3_malloc 00:15:22.127 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.127 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:15:22.127 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.127 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.127 [2024-11-27 09:52:23.084702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:22.127 [2024-11-27 09:52:23.084828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.127 [2024-11-27 09:52:23.084895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:22.127 [2024-11-27 09:52:23.084940] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.127 [2024-11-27 09:52:23.087426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.127 [2024-11-27 09:52:23.087529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:22.127 BaseBdev3 00:15:22.127 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.127 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:22.127 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:22.127 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.127 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.127 BaseBdev4_malloc 00:15:22.127 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.127 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:22.127 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.127 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:22.127 [2024-11-27 09:52:23.148482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:22.127 [2024-11-27 09:52:23.148619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.127 [2024-11-27 09:52:23.148697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:22.127 [2024-11-27 09:52:23.148741] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.127 [2024-11-27 09:52:23.151283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.127 [2024-11-27 09:52:23.151370] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:22.127 BaseBdev4 00:15:22.127 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.127 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:22.127 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.127 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.127 spare_malloc 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.128 spare_delay 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:22.128 09:52:23 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.128 [2024-11-27 09:52:23.222478] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:22.128 [2024-11-27 09:52:23.222607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.128 [2024-11-27 09:52:23.222653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:22.128 [2024-11-27 09:52:23.222698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.128 [2024-11-27 09:52:23.225281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.128 [2024-11-27 09:52:23.225374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:22.128 spare 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.128 [2024-11-27 09:52:23.234516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.128 [2024-11-27 09:52:23.236794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:22.128 [2024-11-27 09:52:23.236922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:22.128 [2024-11-27 09:52:23.237028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:22.128 [2024-11-27 09:52:23.237282] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:22.128 [2024-11-27 09:52:23.237345] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:22.128 [2024-11-27 09:52:23.237643] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:22.128 [2024-11-27 09:52:23.237869] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:22.128 [2024-11-27 09:52:23.237882] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:22.128 [2024-11-27 09:52:23.238064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.128 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.388 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.388 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.388 "name": "raid_bdev1", 00:15:22.388 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:22.388 "strip_size_kb": 0, 00:15:22.388 "state": "online", 00:15:22.388 "raid_level": "raid1", 00:15:22.388 "superblock": true, 00:15:22.388 "num_base_bdevs": 4, 00:15:22.388 "num_base_bdevs_discovered": 4, 00:15:22.388 "num_base_bdevs_operational": 4, 00:15:22.388 "base_bdevs_list": [ 00:15:22.388 { 00:15:22.388 "name": "BaseBdev1", 00:15:22.388 "uuid": "4077f151-8b3e-5f22-96f7-e65dea9f4a15", 00:15:22.388 "is_configured": true, 00:15:22.388 "data_offset": 2048, 00:15:22.388 "data_size": 63488 00:15:22.388 }, 00:15:22.388 { 00:15:22.388 "name": "BaseBdev2", 00:15:22.388 "uuid": "27616b6f-7e68-5563-aca6-a0889d8126e6", 00:15:22.388 "is_configured": true, 00:15:22.388 "data_offset": 2048, 00:15:22.388 "data_size": 63488 00:15:22.388 }, 00:15:22.388 { 00:15:22.388 "name": "BaseBdev3", 00:15:22.388 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:22.388 "is_configured": true, 00:15:22.388 "data_offset": 2048, 00:15:22.388 "data_size": 63488 00:15:22.388 }, 00:15:22.388 { 00:15:22.388 "name": "BaseBdev4", 00:15:22.388 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:22.388 "is_configured": true, 00:15:22.388 "data_offset": 2048, 00:15:22.388 "data_size": 63488 00:15:22.388 } 00:15:22.388 ] 00:15:22.388 }' 00:15:22.388 09:52:23 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.388 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.648 [2024-11-27 09:52:23.654219] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:22.648 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:22.907 [2024-11-27 09:52:23.925495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:22.907 /dev/nbd0 00:15:22.907 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:22.907 09:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:22.907 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:22.907 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:22.907 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:22.907 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:22.907 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:22.907 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:22.907 
09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:22.907 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:22.907 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:22.907 1+0 records in 00:15:22.907 1+0 records out 00:15:22.907 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583253 s, 7.0 MB/s 00:15:22.907 09:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.907 09:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:22.907 09:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:22.907 09:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:22.907 09:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:22.907 09:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:22.907 09:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:22.907 09:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:22.907 09:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:22.907 09:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:29.502 63488+0 records in 00:15:29.502 63488+0 records out 00:15:29.502 32505856 bytes (33 MB, 31 MiB) copied, 5.90071 s, 5.5 MB/s 00:15:29.502 09:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:29.502 09:52:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:29.502 09:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:29.502 09:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:29.502 09:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:29.502 09:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:29.502 09:52:29 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:29.502 [2024-11-27 09:52:30.107990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.502 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:29.502 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:29.502 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:29.502 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:29.502 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:29.502 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:29.502 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:29.502 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:29.502 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:29.502 09:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.502 09:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.503 [2024-11-27 09:52:30.144055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:29.503 
09:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.503 "name": "raid_bdev1", 00:15:29.503 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:29.503 "strip_size_kb": 0, 00:15:29.503 "state": 
"online", 00:15:29.503 "raid_level": "raid1", 00:15:29.503 "superblock": true, 00:15:29.503 "num_base_bdevs": 4, 00:15:29.503 "num_base_bdevs_discovered": 3, 00:15:29.503 "num_base_bdevs_operational": 3, 00:15:29.503 "base_bdevs_list": [ 00:15:29.503 { 00:15:29.503 "name": null, 00:15:29.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.503 "is_configured": false, 00:15:29.503 "data_offset": 0, 00:15:29.503 "data_size": 63488 00:15:29.503 }, 00:15:29.503 { 00:15:29.503 "name": "BaseBdev2", 00:15:29.503 "uuid": "27616b6f-7e68-5563-aca6-a0889d8126e6", 00:15:29.503 "is_configured": true, 00:15:29.503 "data_offset": 2048, 00:15:29.503 "data_size": 63488 00:15:29.503 }, 00:15:29.503 { 00:15:29.503 "name": "BaseBdev3", 00:15:29.503 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:29.503 "is_configured": true, 00:15:29.503 "data_offset": 2048, 00:15:29.503 "data_size": 63488 00:15:29.503 }, 00:15:29.503 { 00:15:29.503 "name": "BaseBdev4", 00:15:29.503 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:29.503 "is_configured": true, 00:15:29.503 "data_offset": 2048, 00:15:29.503 "data_size": 63488 00:15:29.503 } 00:15:29.503 ] 00:15:29.503 }' 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.503 [2024-11-27 09:52:30.587244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.503 [2024-11-27 09:52:30.602627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.503 09:52:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:29.503 [2024-11-27 09:52:30.604920] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.884 "name": "raid_bdev1", 00:15:30.884 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:30.884 "strip_size_kb": 0, 00:15:30.884 "state": "online", 00:15:30.884 "raid_level": "raid1", 00:15:30.884 "superblock": true, 00:15:30.884 "num_base_bdevs": 4, 00:15:30.884 "num_base_bdevs_discovered": 4, 00:15:30.884 "num_base_bdevs_operational": 4, 00:15:30.884 "process": { 00:15:30.884 "type": "rebuild", 00:15:30.884 "target": "spare", 00:15:30.884 "progress": { 00:15:30.884 "blocks": 20480, 
00:15:30.884 "percent": 32 00:15:30.884 } 00:15:30.884 }, 00:15:30.884 "base_bdevs_list": [ 00:15:30.884 { 00:15:30.884 "name": "spare", 00:15:30.884 "uuid": "92492a54-6b20-5090-8c0b-6630c18cc0e8", 00:15:30.884 "is_configured": true, 00:15:30.884 "data_offset": 2048, 00:15:30.884 "data_size": 63488 00:15:30.884 }, 00:15:30.884 { 00:15:30.884 "name": "BaseBdev2", 00:15:30.884 "uuid": "27616b6f-7e68-5563-aca6-a0889d8126e6", 00:15:30.884 "is_configured": true, 00:15:30.884 "data_offset": 2048, 00:15:30.884 "data_size": 63488 00:15:30.884 }, 00:15:30.884 { 00:15:30.884 "name": "BaseBdev3", 00:15:30.884 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:30.884 "is_configured": true, 00:15:30.884 "data_offset": 2048, 00:15:30.884 "data_size": 63488 00:15:30.884 }, 00:15:30.884 { 00:15:30.884 "name": "BaseBdev4", 00:15:30.884 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:30.884 "is_configured": true, 00:15:30.884 "data_offset": 2048, 00:15:30.884 "data_size": 63488 00:15:30.884 } 00:15:30.884 ] 00:15:30.884 }' 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.884 [2024-11-27 09:52:31.764615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:30.884 [2024-11-27 09:52:31.814121] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:30.884 [2024-11-27 09:52:31.814275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.884 [2024-11-27 09:52:31.814325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:30.884 [2024-11-27 09:52:31.814371] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.884 "name": "raid_bdev1", 00:15:30.884 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:30.884 "strip_size_kb": 0, 00:15:30.884 "state": "online", 00:15:30.884 "raid_level": "raid1", 00:15:30.884 "superblock": true, 00:15:30.884 "num_base_bdevs": 4, 00:15:30.884 "num_base_bdevs_discovered": 3, 00:15:30.884 "num_base_bdevs_operational": 3, 00:15:30.884 "base_bdevs_list": [ 00:15:30.884 { 00:15:30.884 "name": null, 00:15:30.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.884 "is_configured": false, 00:15:30.884 "data_offset": 0, 00:15:30.884 "data_size": 63488 00:15:30.884 }, 00:15:30.884 { 00:15:30.884 "name": "BaseBdev2", 00:15:30.884 "uuid": "27616b6f-7e68-5563-aca6-a0889d8126e6", 00:15:30.884 "is_configured": true, 00:15:30.884 "data_offset": 2048, 00:15:30.884 "data_size": 63488 00:15:30.884 }, 00:15:30.884 { 00:15:30.884 "name": "BaseBdev3", 00:15:30.884 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:30.884 "is_configured": true, 00:15:30.884 "data_offset": 2048, 00:15:30.884 "data_size": 63488 00:15:30.884 }, 00:15:30.884 { 00:15:30.884 "name": "BaseBdev4", 00:15:30.884 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:30.884 "is_configured": true, 00:15:30.884 "data_offset": 2048, 00:15:30.884 "data_size": 63488 00:15:30.884 } 00:15:30.884 ] 00:15:30.884 }' 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.884 09:52:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.453 09:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:15:31.453 09:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.453 09:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:31.453 09:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:31.453 09:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.453 09:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.453 09:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.453 09:52:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.453 09:52:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.453 09:52:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.453 09:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.453 "name": "raid_bdev1", 00:15:31.453 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:31.453 "strip_size_kb": 0, 00:15:31.453 "state": "online", 00:15:31.453 "raid_level": "raid1", 00:15:31.453 "superblock": true, 00:15:31.453 "num_base_bdevs": 4, 00:15:31.453 "num_base_bdevs_discovered": 3, 00:15:31.453 "num_base_bdevs_operational": 3, 00:15:31.453 "base_bdevs_list": [ 00:15:31.453 { 00:15:31.453 "name": null, 00:15:31.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.453 "is_configured": false, 00:15:31.453 "data_offset": 0, 00:15:31.453 "data_size": 63488 00:15:31.453 }, 00:15:31.453 { 00:15:31.453 "name": "BaseBdev2", 00:15:31.453 "uuid": "27616b6f-7e68-5563-aca6-a0889d8126e6", 00:15:31.453 "is_configured": true, 00:15:31.453 "data_offset": 2048, 00:15:31.453 "data_size": 63488 00:15:31.454 }, 00:15:31.454 { 00:15:31.454 "name": "BaseBdev3", 00:15:31.454 "uuid": 
"09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:31.454 "is_configured": true, 00:15:31.454 "data_offset": 2048, 00:15:31.454 "data_size": 63488 00:15:31.454 }, 00:15:31.454 { 00:15:31.454 "name": "BaseBdev4", 00:15:31.454 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:31.454 "is_configured": true, 00:15:31.454 "data_offset": 2048, 00:15:31.454 "data_size": 63488 00:15:31.454 } 00:15:31.454 ] 00:15:31.454 }' 00:15:31.454 09:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.454 09:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:31.454 09:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.454 09:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:31.454 09:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:31.454 09:52:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.454 09:52:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.454 [2024-11-27 09:52:32.420720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:31.454 [2024-11-27 09:52:32.435377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:15:31.454 09:52:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.454 09:52:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:31.454 [2024-11-27 09:52:32.437709] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:32.392 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.392 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:32.392 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.392 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.392 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.392 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.392 09:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.392 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.393 09:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.393 09:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.393 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.393 "name": "raid_bdev1", 00:15:32.393 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:32.393 "strip_size_kb": 0, 00:15:32.393 "state": "online", 00:15:32.393 "raid_level": "raid1", 00:15:32.393 "superblock": true, 00:15:32.393 "num_base_bdevs": 4, 00:15:32.393 "num_base_bdevs_discovered": 4, 00:15:32.393 "num_base_bdevs_operational": 4, 00:15:32.393 "process": { 00:15:32.393 "type": "rebuild", 00:15:32.393 "target": "spare", 00:15:32.393 "progress": { 00:15:32.393 "blocks": 20480, 00:15:32.393 "percent": 32 00:15:32.393 } 00:15:32.393 }, 00:15:32.393 "base_bdevs_list": [ 00:15:32.393 { 00:15:32.393 "name": "spare", 00:15:32.393 "uuid": "92492a54-6b20-5090-8c0b-6630c18cc0e8", 00:15:32.393 "is_configured": true, 00:15:32.393 "data_offset": 2048, 00:15:32.393 "data_size": 63488 00:15:32.393 }, 00:15:32.393 { 00:15:32.393 "name": "BaseBdev2", 00:15:32.393 "uuid": "27616b6f-7e68-5563-aca6-a0889d8126e6", 00:15:32.393 "is_configured": true, 00:15:32.393 "data_offset": 2048, 
00:15:32.393 "data_size": 63488 00:15:32.393 }, 00:15:32.393 { 00:15:32.393 "name": "BaseBdev3", 00:15:32.393 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:32.393 "is_configured": true, 00:15:32.393 "data_offset": 2048, 00:15:32.393 "data_size": 63488 00:15:32.393 }, 00:15:32.393 { 00:15:32.393 "name": "BaseBdev4", 00:15:32.393 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:32.393 "is_configured": true, 00:15:32.393 "data_offset": 2048, 00:15:32.393 "data_size": 63488 00:15:32.393 } 00:15:32.393 ] 00:15:32.393 }' 00:15:32.393 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:32.653 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.653 [2024-11-27 09:52:33.605265] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:32.653 [2024-11-27 09:52:33.746873] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.653 09:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.914 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.914 "name": "raid_bdev1", 00:15:32.914 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:32.914 "strip_size_kb": 0, 00:15:32.914 "state": "online", 00:15:32.914 "raid_level": "raid1", 00:15:32.914 "superblock": true, 00:15:32.914 "num_base_bdevs": 4, 
00:15:32.914 "num_base_bdevs_discovered": 3, 00:15:32.914 "num_base_bdevs_operational": 3, 00:15:32.914 "process": { 00:15:32.914 "type": "rebuild", 00:15:32.914 "target": "spare", 00:15:32.914 "progress": { 00:15:32.914 "blocks": 24576, 00:15:32.914 "percent": 38 00:15:32.914 } 00:15:32.914 }, 00:15:32.914 "base_bdevs_list": [ 00:15:32.914 { 00:15:32.914 "name": "spare", 00:15:32.914 "uuid": "92492a54-6b20-5090-8c0b-6630c18cc0e8", 00:15:32.914 "is_configured": true, 00:15:32.914 "data_offset": 2048, 00:15:32.914 "data_size": 63488 00:15:32.914 }, 00:15:32.914 { 00:15:32.914 "name": null, 00:15:32.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.914 "is_configured": false, 00:15:32.914 "data_offset": 0, 00:15:32.914 "data_size": 63488 00:15:32.914 }, 00:15:32.914 { 00:15:32.914 "name": "BaseBdev3", 00:15:32.914 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:32.914 "is_configured": true, 00:15:32.914 "data_offset": 2048, 00:15:32.914 "data_size": 63488 00:15:32.914 }, 00:15:32.914 { 00:15:32.914 "name": "BaseBdev4", 00:15:32.914 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:32.914 "is_configured": true, 00:15:32.914 "data_offset": 2048, 00:15:32.914 "data_size": 63488 00:15:32.914 } 00:15:32.914 ] 00:15:32.914 }' 00:15:32.914 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.914 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.914 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.914 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.914 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=468 00:15:32.914 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.914 09:52:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.914 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.914 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.914 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.914 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.914 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.914 09:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.914 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.914 09:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.914 09:52:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.914 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.914 "name": "raid_bdev1", 00:15:32.914 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:32.914 "strip_size_kb": 0, 00:15:32.914 "state": "online", 00:15:32.914 "raid_level": "raid1", 00:15:32.914 "superblock": true, 00:15:32.914 "num_base_bdevs": 4, 00:15:32.914 "num_base_bdevs_discovered": 3, 00:15:32.914 "num_base_bdevs_operational": 3, 00:15:32.914 "process": { 00:15:32.914 "type": "rebuild", 00:15:32.914 "target": "spare", 00:15:32.914 "progress": { 00:15:32.914 "blocks": 26624, 00:15:32.914 "percent": 41 00:15:32.914 } 00:15:32.914 }, 00:15:32.914 "base_bdevs_list": [ 00:15:32.914 { 00:15:32.914 "name": "spare", 00:15:32.914 "uuid": "92492a54-6b20-5090-8c0b-6630c18cc0e8", 00:15:32.914 "is_configured": true, 00:15:32.914 "data_offset": 2048, 00:15:32.914 "data_size": 63488 00:15:32.914 }, 00:15:32.914 { 
00:15:32.914 "name": null, 00:15:32.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.914 "is_configured": false, 00:15:32.914 "data_offset": 0, 00:15:32.914 "data_size": 63488 00:15:32.914 }, 00:15:32.914 { 00:15:32.914 "name": "BaseBdev3", 00:15:32.914 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:32.914 "is_configured": true, 00:15:32.914 "data_offset": 2048, 00:15:32.914 "data_size": 63488 00:15:32.914 }, 00:15:32.914 { 00:15:32.914 "name": "BaseBdev4", 00:15:32.914 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:32.914 "is_configured": true, 00:15:32.914 "data_offset": 2048, 00:15:32.915 "data_size": 63488 00:15:32.915 } 00:15:32.915 ] 00:15:32.915 }' 00:15:32.915 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.915 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.915 09:52:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.915 09:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.915 09:52:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:34.296 09:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:34.296 09:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.296 09:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.296 09:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.296 09:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.296 09:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.296 09:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:34.296 09:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.296 09:52:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.296 09:52:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.296 09:52:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.296 09:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.296 "name": "raid_bdev1", 00:15:34.296 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:34.296 "strip_size_kb": 0, 00:15:34.296 "state": "online", 00:15:34.296 "raid_level": "raid1", 00:15:34.296 "superblock": true, 00:15:34.296 "num_base_bdevs": 4, 00:15:34.296 "num_base_bdevs_discovered": 3, 00:15:34.296 "num_base_bdevs_operational": 3, 00:15:34.296 "process": { 00:15:34.296 "type": "rebuild", 00:15:34.296 "target": "spare", 00:15:34.296 "progress": { 00:15:34.296 "blocks": 49152, 00:15:34.296 "percent": 77 00:15:34.296 } 00:15:34.296 }, 00:15:34.296 "base_bdevs_list": [ 00:15:34.296 { 00:15:34.296 "name": "spare", 00:15:34.296 "uuid": "92492a54-6b20-5090-8c0b-6630c18cc0e8", 00:15:34.296 "is_configured": true, 00:15:34.296 "data_offset": 2048, 00:15:34.296 "data_size": 63488 00:15:34.296 }, 00:15:34.296 { 00:15:34.296 "name": null, 00:15:34.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.296 "is_configured": false, 00:15:34.296 "data_offset": 0, 00:15:34.296 "data_size": 63488 00:15:34.296 }, 00:15:34.296 { 00:15:34.296 "name": "BaseBdev3", 00:15:34.296 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:34.296 "is_configured": true, 00:15:34.296 "data_offset": 2048, 00:15:34.296 "data_size": 63488 00:15:34.296 }, 00:15:34.296 { 00:15:34.296 "name": "BaseBdev4", 00:15:34.296 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:34.296 "is_configured": true, 00:15:34.296 "data_offset": 
2048, 00:15:34.296 "data_size": 63488 00:15:34.296 } 00:15:34.296 ] 00:15:34.296 }' 00:15:34.296 09:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.296 09:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:34.296 09:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.296 09:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:34.296 09:52:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:34.555 [2024-11-27 09:52:35.661568] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:34.555 [2024-11-27 09:52:35.661716] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:34.555 [2024-11-27 09:52:35.661905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.122 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:35.122 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.122 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.122 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.122 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.122 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.122 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.122 09:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.122 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:15:35.122 09:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.122 09:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.122 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.122 "name": "raid_bdev1", 00:15:35.122 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:35.122 "strip_size_kb": 0, 00:15:35.122 "state": "online", 00:15:35.122 "raid_level": "raid1", 00:15:35.122 "superblock": true, 00:15:35.122 "num_base_bdevs": 4, 00:15:35.122 "num_base_bdevs_discovered": 3, 00:15:35.122 "num_base_bdevs_operational": 3, 00:15:35.122 "base_bdevs_list": [ 00:15:35.122 { 00:15:35.122 "name": "spare", 00:15:35.122 "uuid": "92492a54-6b20-5090-8c0b-6630c18cc0e8", 00:15:35.122 "is_configured": true, 00:15:35.122 "data_offset": 2048, 00:15:35.122 "data_size": 63488 00:15:35.122 }, 00:15:35.122 { 00:15:35.122 "name": null, 00:15:35.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.122 "is_configured": false, 00:15:35.122 "data_offset": 0, 00:15:35.122 "data_size": 63488 00:15:35.122 }, 00:15:35.122 { 00:15:35.122 "name": "BaseBdev3", 00:15:35.122 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:35.122 "is_configured": true, 00:15:35.122 "data_offset": 2048, 00:15:35.122 "data_size": 63488 00:15:35.122 }, 00:15:35.122 { 00:15:35.122 "name": "BaseBdev4", 00:15:35.122 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:35.122 "is_configured": true, 00:15:35.122 "data_offset": 2048, 00:15:35.122 "data_size": 63488 00:15:35.122 } 00:15:35.122 ] 00:15:35.122 }' 00:15:35.381 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.381 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:35.381 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:15:35.381 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:35.381 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:35.381 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:35.381 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.381 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:35.381 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:35.381 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.381 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.381 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.381 09:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.381 09:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.381 09:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.381 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.381 "name": "raid_bdev1", 00:15:35.381 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:35.381 "strip_size_kb": 0, 00:15:35.381 "state": "online", 00:15:35.381 "raid_level": "raid1", 00:15:35.382 "superblock": true, 00:15:35.382 "num_base_bdevs": 4, 00:15:35.382 "num_base_bdevs_discovered": 3, 00:15:35.382 "num_base_bdevs_operational": 3, 00:15:35.382 "base_bdevs_list": [ 00:15:35.382 { 00:15:35.382 "name": "spare", 00:15:35.382 "uuid": "92492a54-6b20-5090-8c0b-6630c18cc0e8", 00:15:35.382 "is_configured": true, 00:15:35.382 "data_offset": 2048, 00:15:35.382 "data_size": 63488 
00:15:35.382 }, 00:15:35.382 { 00:15:35.382 "name": null, 00:15:35.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.382 "is_configured": false, 00:15:35.382 "data_offset": 0, 00:15:35.382 "data_size": 63488 00:15:35.382 }, 00:15:35.382 { 00:15:35.382 "name": "BaseBdev3", 00:15:35.382 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:35.382 "is_configured": true, 00:15:35.382 "data_offset": 2048, 00:15:35.382 "data_size": 63488 00:15:35.382 }, 00:15:35.382 { 00:15:35.382 "name": "BaseBdev4", 00:15:35.382 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:35.382 "is_configured": true, 00:15:35.382 "data_offset": 2048, 00:15:35.382 "data_size": 63488 00:15:35.382 } 00:15:35.382 ] 00:15:35.382 }' 00:15:35.382 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.382 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:35.382 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.382 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:35.382 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:35.382 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.382 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.382 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.382 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.382 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:35.382 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.382 09:52:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.382 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.382 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.382 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.382 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.382 09:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.382 09:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.382 09:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.641 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.641 "name": "raid_bdev1", 00:15:35.641 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:35.641 "strip_size_kb": 0, 00:15:35.641 "state": "online", 00:15:35.641 "raid_level": "raid1", 00:15:35.641 "superblock": true, 00:15:35.641 "num_base_bdevs": 4, 00:15:35.641 "num_base_bdevs_discovered": 3, 00:15:35.641 "num_base_bdevs_operational": 3, 00:15:35.641 "base_bdevs_list": [ 00:15:35.641 { 00:15:35.641 "name": "spare", 00:15:35.641 "uuid": "92492a54-6b20-5090-8c0b-6630c18cc0e8", 00:15:35.641 "is_configured": true, 00:15:35.641 "data_offset": 2048, 00:15:35.641 "data_size": 63488 00:15:35.641 }, 00:15:35.641 { 00:15:35.641 "name": null, 00:15:35.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.642 "is_configured": false, 00:15:35.642 "data_offset": 0, 00:15:35.642 "data_size": 63488 00:15:35.642 }, 00:15:35.642 { 00:15:35.642 "name": "BaseBdev3", 00:15:35.642 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:35.642 "is_configured": true, 00:15:35.642 "data_offset": 2048, 00:15:35.642 "data_size": 63488 00:15:35.642 }, 
00:15:35.642 { 00:15:35.642 "name": "BaseBdev4", 00:15:35.642 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:35.642 "is_configured": true, 00:15:35.642 "data_offset": 2048, 00:15:35.642 "data_size": 63488 00:15:35.642 } 00:15:35.642 ] 00:15:35.642 }' 00:15:35.642 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.642 09:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.901 [2024-11-27 09:52:36.902730] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.901 [2024-11-27 09:52:36.902819] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.901 [2024-11-27 09:52:36.902975] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.901 [2024-11-27 09:52:36.903124] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.901 [2024-11-27 09:52:36.903182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.901 09:52:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:35.901 09:52:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:36.161 /dev/nbd0 00:15:36.161 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:36.161 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:36.161 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:36.161 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:15:36.161 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.161 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.161 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:36.161 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:36.161 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.161 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.161 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.161 1+0 records in 00:15:36.161 1+0 records out 00:15:36.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054991 s, 7.4 MB/s 00:15:36.161 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.161 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:36.161 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.161 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.161 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:36.161 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.161 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:36.161 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:36.421 /dev/nbd1 00:15:36.421 09:52:37 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:36.421 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:36.421 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:36.421 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:36.421 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.421 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.421 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:36.421 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:36.421 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.421 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.421 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.421 1+0 records in 00:15:36.421 1+0 records out 00:15:36.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439955 s, 9.3 MB/s 00:15:36.421 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.421 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:36.421 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.421 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.421 09:52:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:36.421 09:52:37 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.421 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:36.421 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:36.681 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:36.681 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:36.681 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:36.681 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:36.681 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:36.681 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:36.681 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:36.940 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:36.940 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:36.940 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:36.940 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:36.940 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:36.940 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:36.940 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:36.940 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:36.940 09:52:37 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:36.940 09:52:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.200 [2024-11-27 09:52:38.113100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:15:37.200 [2024-11-27 09:52:38.113220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.200 [2024-11-27 09:52:38.113271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:37.200 [2024-11-27 09:52:38.113308] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.200 [2024-11-27 09:52:38.116024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.200 [2024-11-27 09:52:38.116103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:37.200 [2024-11-27 09:52:38.116254] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:37.200 [2024-11-27 09:52:38.116372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:37.200 [2024-11-27 09:52:38.116602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:37.200 [2024-11-27 09:52:38.116766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:37.200 spare 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.200 [2024-11-27 09:52:38.216714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:37.200 [2024-11-27 09:52:38.216779] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:37.200 [2024-11-27 09:52:38.217143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:15:37.200 [2024-11-27 09:52:38.217369] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:37.200 [2024-11-27 09:52:38.217421] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:37.200 [2024-11-27 09:52:38.217645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:37.200 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.201 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.201 "name": "raid_bdev1", 00:15:37.201 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:37.201 "strip_size_kb": 0, 00:15:37.201 "state": "online", 00:15:37.201 "raid_level": "raid1", 00:15:37.201 "superblock": true, 00:15:37.201 "num_base_bdevs": 4, 00:15:37.201 "num_base_bdevs_discovered": 3, 00:15:37.201 "num_base_bdevs_operational": 3, 00:15:37.201 "base_bdevs_list": [ 00:15:37.201 { 00:15:37.201 "name": "spare", 00:15:37.201 "uuid": "92492a54-6b20-5090-8c0b-6630c18cc0e8", 00:15:37.201 "is_configured": true, 00:15:37.201 "data_offset": 2048, 00:15:37.201 "data_size": 63488 00:15:37.201 }, 00:15:37.201 { 00:15:37.201 "name": null, 00:15:37.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.201 "is_configured": false, 00:15:37.201 "data_offset": 2048, 00:15:37.201 "data_size": 63488 00:15:37.201 }, 00:15:37.201 { 00:15:37.201 "name": "BaseBdev3", 00:15:37.201 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:37.201 "is_configured": true, 00:15:37.201 "data_offset": 2048, 00:15:37.201 "data_size": 63488 00:15:37.201 }, 00:15:37.201 { 00:15:37.201 "name": "BaseBdev4", 00:15:37.201 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:37.201 "is_configured": true, 00:15:37.201 "data_offset": 2048, 00:15:37.201 "data_size": 63488 00:15:37.201 } 00:15:37.201 ] 00:15:37.201 }' 00:15:37.201 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.201 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.770 09:52:38 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.770 "name": "raid_bdev1", 00:15:37.770 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:37.770 "strip_size_kb": 0, 00:15:37.770 "state": "online", 00:15:37.770 "raid_level": "raid1", 00:15:37.770 "superblock": true, 00:15:37.770 "num_base_bdevs": 4, 00:15:37.770 "num_base_bdevs_discovered": 3, 00:15:37.770 "num_base_bdevs_operational": 3, 00:15:37.770 "base_bdevs_list": [ 00:15:37.770 { 00:15:37.770 "name": "spare", 00:15:37.770 "uuid": "92492a54-6b20-5090-8c0b-6630c18cc0e8", 00:15:37.770 "is_configured": true, 00:15:37.770 "data_offset": 2048, 00:15:37.770 "data_size": 63488 00:15:37.770 }, 00:15:37.770 { 00:15:37.770 "name": null, 00:15:37.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.770 "is_configured": false, 00:15:37.770 "data_offset": 2048, 00:15:37.770 "data_size": 63488 00:15:37.770 }, 00:15:37.770 { 00:15:37.770 "name": "BaseBdev3", 00:15:37.770 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:37.770 "is_configured": true, 00:15:37.770 "data_offset": 2048, 00:15:37.770 "data_size": 63488 00:15:37.770 
}, 00:15:37.770 { 00:15:37.770 "name": "BaseBdev4", 00:15:37.770 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:37.770 "is_configured": true, 00:15:37.770 "data_offset": 2048, 00:15:37.770 "data_size": 63488 00:15:37.770 } 00:15:37.770 ] 00:15:37.770 }' 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.770 [2024-11-27 09:52:38.856610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.770 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.030 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.030 "name": "raid_bdev1", 00:15:38.030 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:38.030 "strip_size_kb": 0, 00:15:38.030 "state": "online", 00:15:38.030 "raid_level": "raid1", 00:15:38.030 "superblock": true, 00:15:38.030 "num_base_bdevs": 4, 00:15:38.030 "num_base_bdevs_discovered": 2, 00:15:38.030 "num_base_bdevs_operational": 
2, 00:15:38.030 "base_bdevs_list": [ 00:15:38.030 { 00:15:38.030 "name": null, 00:15:38.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.030 "is_configured": false, 00:15:38.030 "data_offset": 0, 00:15:38.030 "data_size": 63488 00:15:38.030 }, 00:15:38.030 { 00:15:38.030 "name": null, 00:15:38.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.030 "is_configured": false, 00:15:38.030 "data_offset": 2048, 00:15:38.030 "data_size": 63488 00:15:38.030 }, 00:15:38.030 { 00:15:38.030 "name": "BaseBdev3", 00:15:38.030 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:38.030 "is_configured": true, 00:15:38.030 "data_offset": 2048, 00:15:38.030 "data_size": 63488 00:15:38.031 }, 00:15:38.031 { 00:15:38.031 "name": "BaseBdev4", 00:15:38.031 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:38.031 "is_configured": true, 00:15:38.031 "data_offset": 2048, 00:15:38.031 "data_size": 63488 00:15:38.031 } 00:15:38.031 ] 00:15:38.031 }' 00:15:38.031 09:52:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.031 09:52:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.289 09:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:38.289 09:52:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.289 09:52:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.289 [2024-11-27 09:52:39.260022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:38.289 [2024-11-27 09:52:39.260324] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:38.289 [2024-11-27 09:52:39.260395] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:38.289 [2024-11-27 09:52:39.260506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:38.289 [2024-11-27 09:52:39.275408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:15:38.289 09:52:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.289 09:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:38.289 [2024-11-27 09:52:39.277668] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:39.227 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.227 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.227 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.227 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.227 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.227 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.227 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.227 09:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.227 09:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.227 09:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.227 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.227 "name": "raid_bdev1", 00:15:39.227 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:39.227 "strip_size_kb": 0, 00:15:39.227 "state": "online", 00:15:39.227 "raid_level": "raid1", 
00:15:39.227 "superblock": true, 00:15:39.227 "num_base_bdevs": 4, 00:15:39.227 "num_base_bdevs_discovered": 3, 00:15:39.227 "num_base_bdevs_operational": 3, 00:15:39.227 "process": { 00:15:39.227 "type": "rebuild", 00:15:39.227 "target": "spare", 00:15:39.227 "progress": { 00:15:39.227 "blocks": 20480, 00:15:39.227 "percent": 32 00:15:39.227 } 00:15:39.227 }, 00:15:39.227 "base_bdevs_list": [ 00:15:39.227 { 00:15:39.227 "name": "spare", 00:15:39.227 "uuid": "92492a54-6b20-5090-8c0b-6630c18cc0e8", 00:15:39.227 "is_configured": true, 00:15:39.227 "data_offset": 2048, 00:15:39.227 "data_size": 63488 00:15:39.227 }, 00:15:39.227 { 00:15:39.227 "name": null, 00:15:39.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.227 "is_configured": false, 00:15:39.227 "data_offset": 2048, 00:15:39.227 "data_size": 63488 00:15:39.227 }, 00:15:39.227 { 00:15:39.227 "name": "BaseBdev3", 00:15:39.227 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:39.227 "is_configured": true, 00:15:39.227 "data_offset": 2048, 00:15:39.227 "data_size": 63488 00:15:39.227 }, 00:15:39.227 { 00:15:39.227 "name": "BaseBdev4", 00:15:39.227 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:39.227 "is_configured": true, 00:15:39.227 "data_offset": 2048, 00:15:39.227 "data_size": 63488 00:15:39.228 } 00:15:39.228 ] 00:15:39.228 }' 00:15:39.228 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.487 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.487 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.487 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.487 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.488 [2024-11-27 09:52:40.421877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:39.488 [2024-11-27 09:52:40.486548] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:39.488 [2024-11-27 09:52:40.486689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.488 [2024-11-27 09:52:40.486737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:39.488 [2024-11-27 09:52:40.486779] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.488 "name": "raid_bdev1", 00:15:39.488 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:39.488 "strip_size_kb": 0, 00:15:39.488 "state": "online", 00:15:39.488 "raid_level": "raid1", 00:15:39.488 "superblock": true, 00:15:39.488 "num_base_bdevs": 4, 00:15:39.488 "num_base_bdevs_discovered": 2, 00:15:39.488 "num_base_bdevs_operational": 2, 00:15:39.488 "base_bdevs_list": [ 00:15:39.488 { 00:15:39.488 "name": null, 00:15:39.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.488 "is_configured": false, 00:15:39.488 "data_offset": 0, 00:15:39.488 "data_size": 63488 00:15:39.488 }, 00:15:39.488 { 00:15:39.488 "name": null, 00:15:39.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.488 "is_configured": false, 00:15:39.488 "data_offset": 2048, 00:15:39.488 "data_size": 63488 00:15:39.488 }, 00:15:39.488 { 00:15:39.488 "name": "BaseBdev3", 00:15:39.488 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:39.488 "is_configured": true, 00:15:39.488 "data_offset": 2048, 00:15:39.488 "data_size": 63488 00:15:39.488 }, 00:15:39.488 { 00:15:39.488 "name": "BaseBdev4", 00:15:39.488 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:39.488 "is_configured": true, 00:15:39.488 "data_offset": 2048, 00:15:39.488 "data_size": 63488 00:15:39.488 } 00:15:39.488 ] 00:15:39.488 }' 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:39.488 09:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.057 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:40.057 09:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.057 09:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.057 [2024-11-27 09:52:40.941685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:40.057 [2024-11-27 09:52:40.941858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.057 [2024-11-27 09:52:40.941925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:15:40.057 [2024-11-27 09:52:40.941965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.057 [2024-11-27 09:52:40.942636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.057 [2024-11-27 09:52:40.942719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:40.057 [2024-11-27 09:52:40.942894] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:40.057 [2024-11-27 09:52:40.942945] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:40.057 [2024-11-27 09:52:40.943018] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:40.057 [2024-11-27 09:52:40.943067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:40.057 spare 00:15:40.057 [2024-11-27 09:52:40.957715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:15:40.057 09:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.057 09:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:40.057 [2024-11-27 09:52:40.959953] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:40.996 09:52:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.997 09:52:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.997 09:52:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.997 09:52:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.997 09:52:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.997 09:52:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.997 09:52:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.997 09:52:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.997 09:52:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.997 09:52:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.997 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.997 "name": "raid_bdev1", 00:15:40.997 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:40.997 "strip_size_kb": 0, 00:15:40.997 "state": "online", 00:15:40.997 
"raid_level": "raid1", 00:15:40.997 "superblock": true, 00:15:40.997 "num_base_bdevs": 4, 00:15:40.997 "num_base_bdevs_discovered": 3, 00:15:40.997 "num_base_bdevs_operational": 3, 00:15:40.997 "process": { 00:15:40.997 "type": "rebuild", 00:15:40.997 "target": "spare", 00:15:40.997 "progress": { 00:15:40.997 "blocks": 20480, 00:15:40.997 "percent": 32 00:15:40.997 } 00:15:40.997 }, 00:15:40.997 "base_bdevs_list": [ 00:15:40.997 { 00:15:40.997 "name": "spare", 00:15:40.997 "uuid": "92492a54-6b20-5090-8c0b-6630c18cc0e8", 00:15:40.997 "is_configured": true, 00:15:40.997 "data_offset": 2048, 00:15:40.997 "data_size": 63488 00:15:40.997 }, 00:15:40.997 { 00:15:40.997 "name": null, 00:15:40.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.997 "is_configured": false, 00:15:40.997 "data_offset": 2048, 00:15:40.997 "data_size": 63488 00:15:40.997 }, 00:15:40.997 { 00:15:40.997 "name": "BaseBdev3", 00:15:40.997 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:40.997 "is_configured": true, 00:15:40.997 "data_offset": 2048, 00:15:40.997 "data_size": 63488 00:15:40.997 }, 00:15:40.997 { 00:15:40.997 "name": "BaseBdev4", 00:15:40.997 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:40.997 "is_configured": true, 00:15:40.997 "data_offset": 2048, 00:15:40.997 "data_size": 63488 00:15:40.997 } 00:15:40.997 ] 00:15:40.997 }' 00:15:40.997 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.997 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:40.997 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.997 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:40.997 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:40.997 09:52:42 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.997 09:52:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.997 [2024-11-27 09:52:42.123630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:41.256 [2024-11-27 09:52:42.169275] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:41.257 [2024-11-27 09:52:42.169353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.257 [2024-11-27 09:52:42.169373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:41.257 [2024-11-27 09:52:42.169386] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:41.257 09:52:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.257 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:41.257 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.257 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.257 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.257 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.257 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:41.257 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.257 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.257 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.257 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.257 
09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.257 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.257 09:52:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.257 09:52:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.257 09:52:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.257 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.257 "name": "raid_bdev1", 00:15:41.257 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:41.257 "strip_size_kb": 0, 00:15:41.257 "state": "online", 00:15:41.257 "raid_level": "raid1", 00:15:41.257 "superblock": true, 00:15:41.257 "num_base_bdevs": 4, 00:15:41.257 "num_base_bdevs_discovered": 2, 00:15:41.257 "num_base_bdevs_operational": 2, 00:15:41.257 "base_bdevs_list": [ 00:15:41.257 { 00:15:41.257 "name": null, 00:15:41.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.257 "is_configured": false, 00:15:41.257 "data_offset": 0, 00:15:41.257 "data_size": 63488 00:15:41.257 }, 00:15:41.257 { 00:15:41.257 "name": null, 00:15:41.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.257 "is_configured": false, 00:15:41.257 "data_offset": 2048, 00:15:41.257 "data_size": 63488 00:15:41.257 }, 00:15:41.257 { 00:15:41.257 "name": "BaseBdev3", 00:15:41.257 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:41.257 "is_configured": true, 00:15:41.257 "data_offset": 2048, 00:15:41.257 "data_size": 63488 00:15:41.257 }, 00:15:41.257 { 00:15:41.257 "name": "BaseBdev4", 00:15:41.257 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:41.257 "is_configured": true, 00:15:41.257 "data_offset": 2048, 00:15:41.257 "data_size": 63488 00:15:41.257 } 00:15:41.257 ] 00:15:41.257 }' 00:15:41.257 09:52:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.257 09:52:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.516 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:41.516 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.516 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:41.516 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:41.516 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.776 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.776 09:52:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.776 09:52:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.776 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.776 09:52:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.776 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.776 "name": "raid_bdev1", 00:15:41.776 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:41.776 "strip_size_kb": 0, 00:15:41.776 "state": "online", 00:15:41.776 "raid_level": "raid1", 00:15:41.776 "superblock": true, 00:15:41.776 "num_base_bdevs": 4, 00:15:41.776 "num_base_bdevs_discovered": 2, 00:15:41.776 "num_base_bdevs_operational": 2, 00:15:41.776 "base_bdevs_list": [ 00:15:41.776 { 00:15:41.776 "name": null, 00:15:41.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.776 "is_configured": false, 00:15:41.776 "data_offset": 0, 00:15:41.776 "data_size": 63488 00:15:41.776 }, 00:15:41.776 
{ 00:15:41.776 "name": null, 00:15:41.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.776 "is_configured": false, 00:15:41.776 "data_offset": 2048, 00:15:41.776 "data_size": 63488 00:15:41.776 }, 00:15:41.776 { 00:15:41.776 "name": "BaseBdev3", 00:15:41.776 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:41.776 "is_configured": true, 00:15:41.776 "data_offset": 2048, 00:15:41.776 "data_size": 63488 00:15:41.776 }, 00:15:41.776 { 00:15:41.776 "name": "BaseBdev4", 00:15:41.776 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:41.776 "is_configured": true, 00:15:41.776 "data_offset": 2048, 00:15:41.776 "data_size": 63488 00:15:41.776 } 00:15:41.777 ] 00:15:41.777 }' 00:15:41.777 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.777 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:41.777 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.777 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:41.777 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:41.777 09:52:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.777 09:52:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.777 09:52:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.777 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:41.777 09:52:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.777 09:52:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.777 [2024-11-27 09:52:42.789129] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:41.777 [2024-11-27 09:52:42.789218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.777 [2024-11-27 09:52:42.789248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:15:41.777 [2024-11-27 09:52:42.789264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.777 [2024-11-27 09:52:42.789850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.777 [2024-11-27 09:52:42.789892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:41.777 [2024-11-27 09:52:42.790021] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:41.777 [2024-11-27 09:52:42.790043] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:41.777 [2024-11-27 09:52:42.790053] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:41.777 [2024-11-27 09:52:42.790087] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:41.777 BaseBdev1 00:15:41.777 09:52:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.777 09:52:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:42.718 09:52:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:42.718 09:52:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.718 09:52:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.718 09:52:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.718 09:52:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.718 09:52:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:42.718 09:52:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.718 09:52:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.718 09:52:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.718 09:52:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.718 09:52:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.718 09:52:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.718 09:52:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.718 09:52:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.718 09:52:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.978 09:52:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.978 "name": "raid_bdev1", 00:15:42.978 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:42.978 "strip_size_kb": 0, 00:15:42.978 "state": "online", 00:15:42.978 "raid_level": "raid1", 00:15:42.978 "superblock": true, 00:15:42.978 "num_base_bdevs": 4, 00:15:42.978 "num_base_bdevs_discovered": 2, 00:15:42.978 "num_base_bdevs_operational": 2, 00:15:42.978 "base_bdevs_list": [ 00:15:42.978 { 00:15:42.978 "name": null, 00:15:42.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.978 "is_configured": false, 00:15:42.978 "data_offset": 0, 00:15:42.978 "data_size": 63488 00:15:42.978 }, 00:15:42.978 { 00:15:42.978 "name": null, 00:15:42.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.979 
"is_configured": false, 00:15:42.979 "data_offset": 2048, 00:15:42.979 "data_size": 63488 00:15:42.979 }, 00:15:42.979 { 00:15:42.979 "name": "BaseBdev3", 00:15:42.979 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:42.979 "is_configured": true, 00:15:42.979 "data_offset": 2048, 00:15:42.979 "data_size": 63488 00:15:42.979 }, 00:15:42.979 { 00:15:42.979 "name": "BaseBdev4", 00:15:42.979 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:42.979 "is_configured": true, 00:15:42.979 "data_offset": 2048, 00:15:42.979 "data_size": 63488 00:15:42.979 } 00:15:42.979 ] 00:15:42.979 }' 00:15:42.979 09:52:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.979 09:52:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.240 09:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:43.240 09:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.240 09:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:43.240 09:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:43.240 09:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.240 09:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.240 09:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.240 09:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.240 09:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.240 09:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.240 09:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:15:43.240 "name": "raid_bdev1", 00:15:43.240 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:43.240 "strip_size_kb": 0, 00:15:43.240 "state": "online", 00:15:43.240 "raid_level": "raid1", 00:15:43.240 "superblock": true, 00:15:43.240 "num_base_bdevs": 4, 00:15:43.240 "num_base_bdevs_discovered": 2, 00:15:43.240 "num_base_bdevs_operational": 2, 00:15:43.240 "base_bdevs_list": [ 00:15:43.240 { 00:15:43.240 "name": null, 00:15:43.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.240 "is_configured": false, 00:15:43.240 "data_offset": 0, 00:15:43.240 "data_size": 63488 00:15:43.240 }, 00:15:43.240 { 00:15:43.240 "name": null, 00:15:43.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.240 "is_configured": false, 00:15:43.240 "data_offset": 2048, 00:15:43.240 "data_size": 63488 00:15:43.240 }, 00:15:43.240 { 00:15:43.240 "name": "BaseBdev3", 00:15:43.240 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:43.240 "is_configured": true, 00:15:43.240 "data_offset": 2048, 00:15:43.240 "data_size": 63488 00:15:43.240 }, 00:15:43.240 { 00:15:43.240 "name": "BaseBdev4", 00:15:43.240 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:43.240 "is_configured": true, 00:15:43.240 "data_offset": 2048, 00:15:43.240 "data_size": 63488 00:15:43.240 } 00:15:43.240 ] 00:15:43.240 }' 00:15:43.240 09:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.240 09:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:43.240 09:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.501 09:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.501 09:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:43.501 09:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:15:43.501 09:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:43.501 09:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:43.501 09:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:43.501 09:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:43.501 09:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:43.501 09:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:43.501 09:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.501 09:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.501 [2024-11-27 09:52:44.402596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:43.501 [2024-11-27 09:52:44.402942] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:43.501 [2024-11-27 09:52:44.403037] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:43.501 request: 00:15:43.501 { 00:15:43.501 "base_bdev": "BaseBdev1", 00:15:43.501 "raid_bdev": "raid_bdev1", 00:15:43.501 "method": "bdev_raid_add_base_bdev", 00:15:43.501 "req_id": 1 00:15:43.501 } 00:15:43.501 Got JSON-RPC error response 00:15:43.501 response: 00:15:43.501 { 00:15:43.501 "code": -22, 00:15:43.501 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:43.501 } 00:15:43.501 09:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:43.501 09:52:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:15:43.501 09:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:43.501 09:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:43.501 09:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:43.501 09:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:44.441 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:44.441 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.441 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.441 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.441 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.441 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:44.441 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.441 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.441 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.441 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.441 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.441 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.441 09:52:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.441 09:52:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:44.441 09:52:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.441 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.441 "name": "raid_bdev1", 00:15:44.441 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:44.441 "strip_size_kb": 0, 00:15:44.441 "state": "online", 00:15:44.441 "raid_level": "raid1", 00:15:44.441 "superblock": true, 00:15:44.441 "num_base_bdevs": 4, 00:15:44.441 "num_base_bdevs_discovered": 2, 00:15:44.441 "num_base_bdevs_operational": 2, 00:15:44.441 "base_bdevs_list": [ 00:15:44.441 { 00:15:44.441 "name": null, 00:15:44.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.441 "is_configured": false, 00:15:44.441 "data_offset": 0, 00:15:44.441 "data_size": 63488 00:15:44.441 }, 00:15:44.441 { 00:15:44.441 "name": null, 00:15:44.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.441 "is_configured": false, 00:15:44.442 "data_offset": 2048, 00:15:44.442 "data_size": 63488 00:15:44.442 }, 00:15:44.442 { 00:15:44.442 "name": "BaseBdev3", 00:15:44.442 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:44.442 "is_configured": true, 00:15:44.442 "data_offset": 2048, 00:15:44.442 "data_size": 63488 00:15:44.442 }, 00:15:44.442 { 00:15:44.442 "name": "BaseBdev4", 00:15:44.442 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:44.442 "is_configured": true, 00:15:44.442 "data_offset": 2048, 00:15:44.442 "data_size": 63488 00:15:44.442 } 00:15:44.442 ] 00:15:44.442 }' 00:15:44.442 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.442 09:52:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.011 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:45.011 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.011 09:52:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:45.011 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:45.011 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.011 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.011 09:52:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.011 09:52:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.011 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.011 09:52:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.011 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.011 "name": "raid_bdev1", 00:15:45.011 "uuid": "5f6a7c4c-a316-499f-a410-b7746c72f331", 00:15:45.011 "strip_size_kb": 0, 00:15:45.011 "state": "online", 00:15:45.011 "raid_level": "raid1", 00:15:45.011 "superblock": true, 00:15:45.011 "num_base_bdevs": 4, 00:15:45.011 "num_base_bdevs_discovered": 2, 00:15:45.011 "num_base_bdevs_operational": 2, 00:15:45.011 "base_bdevs_list": [ 00:15:45.011 { 00:15:45.011 "name": null, 00:15:45.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.011 "is_configured": false, 00:15:45.012 "data_offset": 0, 00:15:45.012 "data_size": 63488 00:15:45.012 }, 00:15:45.012 { 00:15:45.012 "name": null, 00:15:45.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.012 "is_configured": false, 00:15:45.012 "data_offset": 2048, 00:15:45.012 "data_size": 63488 00:15:45.012 }, 00:15:45.012 { 00:15:45.012 "name": "BaseBdev3", 00:15:45.012 "uuid": "09520f00-6048-5d16-88a0-bc2e72ea1242", 00:15:45.012 "is_configured": true, 00:15:45.012 "data_offset": 2048, 00:15:45.012 "data_size": 63488 00:15:45.012 }, 
00:15:45.012 { 00:15:45.012 "name": "BaseBdev4", 00:15:45.012 "uuid": "7fd903cd-b6a4-5302-8884-d343c9a645d7", 00:15:45.012 "is_configured": true, 00:15:45.012 "data_offset": 2048, 00:15:45.012 "data_size": 63488 00:15:45.012 } 00:15:45.012 ] 00:15:45.012 }' 00:15:45.012 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.012 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:45.012 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.012 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:45.012 09:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78260 00:15:45.012 09:52:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78260 ']' 00:15:45.012 09:52:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78260 00:15:45.012 09:52:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:45.012 09:52:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:45.012 09:52:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78260 00:15:45.012 09:52:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:45.012 09:52:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:45.012 killing process with pid 78260 00:15:45.012 Received shutdown signal, test time was about 60.000000 seconds 00:15:45.012 00:15:45.012 Latency(us) 00:15:45.012 [2024-11-27T09:52:46.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.012 [2024-11-27T09:52:46.145Z] 
=================================================================================================================== 00:15:45.012 [2024-11-27T09:52:46.145Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:45.012 09:52:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78260' 00:15:45.012 09:52:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78260 00:15:45.012 [2024-11-27 09:52:46.033905] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:45.012 09:52:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78260 00:15:45.012 [2024-11-27 09:52:46.034081] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:45.012 [2024-11-27 09:52:46.034168] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:45.012 [2024-11-27 09:52:46.034180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:45.581 [2024-11-27 09:52:46.549574] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:46.961 ************************************ 00:15:46.961 END TEST raid_rebuild_test_sb 00:15:46.961 ************************************ 00:15:46.961 00:15:46.961 real 0m25.807s 00:15:46.961 user 0m30.230s 00:15:46.961 sys 0m4.287s 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.961 09:52:47 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:15:46.961 09:52:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:46.961 09:52:47 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.961 09:52:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:46.961 ************************************ 00:15:46.961 START TEST raid_rebuild_test_io 00:15:46.961 ************************************ 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79025 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79025 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79025 ']' 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.961 09:52:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:46.961 [2024-11-27 09:52:47.936724] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:15:46.961 [2024-11-27 09:52:47.936939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79025 ] 00:15:46.961 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:46.961 Zero copy mechanism will not be used. 
00:15:47.221 [2024-11-27 09:52:48.118115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.221 [2024-11-27 09:52:48.253994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.481 [2024-11-27 09:52:48.483738] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.481 [2024-11-27 09:52:48.483788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.741 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:47.741 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:15:47.741 09:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:47.741 09:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:47.741 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.741 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.741 BaseBdev1_malloc 00:15:47.741 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.741 09:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:47.741 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.741 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.741 [2024-11-27 09:52:48.805970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:47.741 [2024-11-27 09:52:48.806131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.741 [2024-11-27 09:52:48.806177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:47.741 [2024-11-27 
09:52:48.806234] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.741 [2024-11-27 09:52:48.808693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.741 [2024-11-27 09:52:48.808784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:47.741 BaseBdev1 00:15:47.741 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.741 09:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:47.741 09:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:47.741 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.741 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.741 BaseBdev2_malloc 00:15:47.741 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.741 09:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:47.741 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.741 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:47.742 [2024-11-27 09:52:48.867164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:47.742 [2024-11-27 09:52:48.867293] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.742 [2024-11-27 09:52:48.867345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:47.742 [2024-11-27 09:52:48.867388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.742 [2024-11-27 09:52:48.869798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:15:47.742 [2024-11-27 09:52:48.869887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:48.002 BaseBdev2 00:15:48.002 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.002 09:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:48.002 09:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:48.002 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.002 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.002 BaseBdev3_malloc 00:15:48.002 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.002 09:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:48.002 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.002 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.002 [2024-11-27 09:52:48.948687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:48.002 [2024-11-27 09:52:48.948828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.002 [2024-11-27 09:52:48.948887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:48.002 [2024-11-27 09:52:48.948927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.002 [2024-11-27 09:52:48.951467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.002 [2024-11-27 09:52:48.951556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:48.002 BaseBdev3 00:15:48.002 09:52:48 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.002 09:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:48.002 09:52:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:48.002 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.002 09:52:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.002 BaseBdev4_malloc 00:15:48.002 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.002 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:48.002 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.002 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.002 [2024-11-27 09:52:49.011088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:48.002 [2024-11-27 09:52:49.011228] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.002 [2024-11-27 09:52:49.011275] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:48.002 [2024-11-27 09:52:49.011336] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.002 [2024-11-27 09:52:49.013755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.002 [2024-11-27 09:52:49.013847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:48.002 BaseBdev4 00:15:48.002 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.002 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:15:48.002 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.002 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.002 spare_malloc 00:15:48.002 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.002 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:48.002 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.002 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.002 spare_delay 00:15:48.002 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.002 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:48.002 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.002 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.002 [2024-11-27 09:52:49.079317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:48.002 [2024-11-27 09:52:49.079426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.002 [2024-11-27 09:52:49.079483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:48.002 [2024-11-27 09:52:49.079524] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.002 [2024-11-27 09:52:49.081909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.002 [2024-11-27 09:52:49.082025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:48.002 spare 00:15:48.002 09:52:49 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.002 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:48.003 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.003 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.003 [2024-11-27 09:52:49.091344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:48.003 [2024-11-27 09:52:49.093507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:48.003 [2024-11-27 09:52:49.093626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:48.003 [2024-11-27 09:52:49.093728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:48.003 [2024-11-27 09:52:49.093854] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:48.003 [2024-11-27 09:52:49.093906] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:48.003 [2024-11-27 09:52:49.094226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:48.003 [2024-11-27 09:52:49.094463] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:48.003 [2024-11-27 09:52:49.094517] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:48.003 [2024-11-27 09:52:49.094730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.003 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.003 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:48.003 09:52:49 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.003 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.003 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.003 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.003 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.003 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.003 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.003 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.003 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.003 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.003 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.003 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.003 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.003 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.261 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.261 "name": "raid_bdev1", 00:15:48.261 "uuid": "75749c57-c4db-4d80-8f84-df22952aa4f7", 00:15:48.261 "strip_size_kb": 0, 00:15:48.261 "state": "online", 00:15:48.261 "raid_level": "raid1", 00:15:48.261 "superblock": false, 00:15:48.261 "num_base_bdevs": 4, 00:15:48.261 "num_base_bdevs_discovered": 4, 00:15:48.261 "num_base_bdevs_operational": 4, 00:15:48.261 "base_bdevs_list": [ 00:15:48.261 
{ 00:15:48.261 "name": "BaseBdev1", 00:15:48.261 "uuid": "a8a06a4b-fe09-5420-9801-6f5f997b9375", 00:15:48.261 "is_configured": true, 00:15:48.261 "data_offset": 0, 00:15:48.261 "data_size": 65536 00:15:48.261 }, 00:15:48.261 { 00:15:48.261 "name": "BaseBdev2", 00:15:48.261 "uuid": "ab18f075-742b-5266-9eb5-962881a71ac4", 00:15:48.261 "is_configured": true, 00:15:48.261 "data_offset": 0, 00:15:48.261 "data_size": 65536 00:15:48.261 }, 00:15:48.261 { 00:15:48.261 "name": "BaseBdev3", 00:15:48.261 "uuid": "8ad236ad-c100-592f-b9e0-5990f5f123b7", 00:15:48.261 "is_configured": true, 00:15:48.261 "data_offset": 0, 00:15:48.261 "data_size": 65536 00:15:48.261 }, 00:15:48.261 { 00:15:48.261 "name": "BaseBdev4", 00:15:48.261 "uuid": "d9821218-c284-5afa-b782-5dca82c74d53", 00:15:48.261 "is_configured": true, 00:15:48.261 "data_offset": 0, 00:15:48.261 "data_size": 65536 00:15:48.261 } 00:15:48.261 ] 00:15:48.261 }' 00:15:48.261 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.262 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.523 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:48.523 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:48.524 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.524 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.524 [2024-11-27 09:52:49.554865] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.524 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.524 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:48.524 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.524 
09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:48.524 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.524 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.524 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.524 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:48.524 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:48.524 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:48.524 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:48.524 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.524 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.524 [2024-11-27 09:52:49.650363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:48.789 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.789 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:48.789 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.789 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.789 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.789 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.789 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:15:48.789 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.789 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.789 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.789 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.789 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.789 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.789 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.789 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.789 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.789 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.789 "name": "raid_bdev1", 00:15:48.789 "uuid": "75749c57-c4db-4d80-8f84-df22952aa4f7", 00:15:48.789 "strip_size_kb": 0, 00:15:48.789 "state": "online", 00:15:48.789 "raid_level": "raid1", 00:15:48.789 "superblock": false, 00:15:48.789 "num_base_bdevs": 4, 00:15:48.789 "num_base_bdevs_discovered": 3, 00:15:48.789 "num_base_bdevs_operational": 3, 00:15:48.789 "base_bdevs_list": [ 00:15:48.789 { 00:15:48.789 "name": null, 00:15:48.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.789 "is_configured": false, 00:15:48.789 "data_offset": 0, 00:15:48.789 "data_size": 65536 00:15:48.789 }, 00:15:48.789 { 00:15:48.789 "name": "BaseBdev2", 00:15:48.789 "uuid": "ab18f075-742b-5266-9eb5-962881a71ac4", 00:15:48.789 "is_configured": true, 00:15:48.789 "data_offset": 0, 00:15:48.789 "data_size": 65536 00:15:48.789 }, 00:15:48.789 { 00:15:48.789 "name": "BaseBdev3", 00:15:48.789 "uuid": 
"8ad236ad-c100-592f-b9e0-5990f5f123b7", 00:15:48.789 "is_configured": true, 00:15:48.789 "data_offset": 0, 00:15:48.789 "data_size": 65536 00:15:48.789 }, 00:15:48.789 { 00:15:48.789 "name": "BaseBdev4", 00:15:48.789 "uuid": "d9821218-c284-5afa-b782-5dca82c74d53", 00:15:48.789 "is_configured": true, 00:15:48.789 "data_offset": 0, 00:15:48.789 "data_size": 65536 00:15:48.789 } 00:15:48.789 ] 00:15:48.789 }' 00:15:48.789 09:52:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.789 09:52:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:48.789 [2024-11-27 09:52:49.739227] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:48.789 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:48.789 Zero copy mechanism will not be used. 00:15:48.789 Running I/O for 60 seconds... 00:15:49.056 09:52:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:49.056 09:52:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.056 09:52:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:49.056 [2024-11-27 09:52:50.084998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:49.056 09:52:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.056 09:52:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:49.056 [2024-11-27 09:52:50.176040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:49.056 [2024-11-27 09:52:50.178594] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:49.316 [2024-11-27 09:52:50.281746] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:49.316 
[2024-11-27 09:52:50.283952] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:49.575 [2024-11-27 09:52:50.504936] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:49.575 [2024-11-27 09:52:50.506291] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:49.834 146.00 IOPS, 438.00 MiB/s [2024-11-27T09:52:50.967Z] [2024-11-27 09:52:50.830882] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:50.094 [2024-11-27 09:52:51.045390] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:50.094 [2024-11-27 09:52:51.046008] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:50.094 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.094 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.094 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.094 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.094 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.094 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.094 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.094 09:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.094 09:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:15:50.094 09:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.094 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.094 "name": "raid_bdev1", 00:15:50.094 "uuid": "75749c57-c4db-4d80-8f84-df22952aa4f7", 00:15:50.094 "strip_size_kb": 0, 00:15:50.094 "state": "online", 00:15:50.094 "raid_level": "raid1", 00:15:50.094 "superblock": false, 00:15:50.094 "num_base_bdevs": 4, 00:15:50.094 "num_base_bdevs_discovered": 4, 00:15:50.094 "num_base_bdevs_operational": 4, 00:15:50.094 "process": { 00:15:50.094 "type": "rebuild", 00:15:50.094 "target": "spare", 00:15:50.094 "progress": { 00:15:50.094 "blocks": 10240, 00:15:50.094 "percent": 15 00:15:50.094 } 00:15:50.094 }, 00:15:50.094 "base_bdevs_list": [ 00:15:50.094 { 00:15:50.094 "name": "spare", 00:15:50.094 "uuid": "b249852a-ce76-5004-8d21-140d89d2edf0", 00:15:50.094 "is_configured": true, 00:15:50.094 "data_offset": 0, 00:15:50.094 "data_size": 65536 00:15:50.094 }, 00:15:50.094 { 00:15:50.094 "name": "BaseBdev2", 00:15:50.094 "uuid": "ab18f075-742b-5266-9eb5-962881a71ac4", 00:15:50.094 "is_configured": true, 00:15:50.094 "data_offset": 0, 00:15:50.094 "data_size": 65536 00:15:50.094 }, 00:15:50.094 { 00:15:50.094 "name": "BaseBdev3", 00:15:50.094 "uuid": "8ad236ad-c100-592f-b9e0-5990f5f123b7", 00:15:50.094 "is_configured": true, 00:15:50.094 "data_offset": 0, 00:15:50.094 "data_size": 65536 00:15:50.094 }, 00:15:50.094 { 00:15:50.094 "name": "BaseBdev4", 00:15:50.094 "uuid": "d9821218-c284-5afa-b782-5dca82c74d53", 00:15:50.094 "is_configured": true, 00:15:50.094 "data_offset": 0, 00:15:50.094 "data_size": 65536 00:15:50.094 } 00:15:50.094 ] 00:15:50.094 }' 00:15:50.094 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.354 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:50.354 09:52:51 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.354 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.354 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:50.354 09:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.354 09:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.354 [2024-11-27 09:52:51.299105] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:50.354 [2024-11-27 09:52:51.301834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:50.354 [2024-11-27 09:52:51.305306] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:50.354 [2024-11-27 09:52:51.405328] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:50.354 [2024-11-27 09:52:51.405879] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:50.354 [2024-11-27 09:52:51.417307] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:50.354 [2024-11-27 09:52:51.432931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.354 [2024-11-27 09:52:51.433035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:50.354 [2024-11-27 09:52:51.433054] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:50.354 [2024-11-27 09:52:51.465767] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:50.614 09:52:51 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.614 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:50.614 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.614 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.614 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.614 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.614 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.614 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.614 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.614 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.614 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.614 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.614 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.614 09:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.614 09:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.614 09:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.614 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.614 "name": "raid_bdev1", 00:15:50.614 "uuid": "75749c57-c4db-4d80-8f84-df22952aa4f7", 00:15:50.614 "strip_size_kb": 0, 00:15:50.614 "state": "online", 
00:15:50.614 "raid_level": "raid1", 00:15:50.614 "superblock": false, 00:15:50.614 "num_base_bdevs": 4, 00:15:50.614 "num_base_bdevs_discovered": 3, 00:15:50.614 "num_base_bdevs_operational": 3, 00:15:50.614 "base_bdevs_list": [ 00:15:50.614 { 00:15:50.614 "name": null, 00:15:50.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.614 "is_configured": false, 00:15:50.614 "data_offset": 0, 00:15:50.614 "data_size": 65536 00:15:50.614 }, 00:15:50.614 { 00:15:50.614 "name": "BaseBdev2", 00:15:50.614 "uuid": "ab18f075-742b-5266-9eb5-962881a71ac4", 00:15:50.614 "is_configured": true, 00:15:50.614 "data_offset": 0, 00:15:50.614 "data_size": 65536 00:15:50.614 }, 00:15:50.614 { 00:15:50.614 "name": "BaseBdev3", 00:15:50.614 "uuid": "8ad236ad-c100-592f-b9e0-5990f5f123b7", 00:15:50.614 "is_configured": true, 00:15:50.614 "data_offset": 0, 00:15:50.614 "data_size": 65536 00:15:50.614 }, 00:15:50.614 { 00:15:50.614 "name": "BaseBdev4", 00:15:50.614 "uuid": "d9821218-c284-5afa-b782-5dca82c74d53", 00:15:50.614 "is_configured": true, 00:15:50.614 "data_offset": 0, 00:15:50.614 "data_size": 65536 00:15:50.614 } 00:15:50.614 ] 00:15:50.614 }' 00:15:50.614 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.614 09:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.874 130.00 IOPS, 390.00 MiB/s [2024-11-27T09:52:52.007Z] 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:50.874 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.874 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:50.874 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:50.874 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.874 09:52:51 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.874 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.874 09:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.874 09:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.874 09:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.874 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.874 "name": "raid_bdev1", 00:15:50.874 "uuid": "75749c57-c4db-4d80-8f84-df22952aa4f7", 00:15:50.874 "strip_size_kb": 0, 00:15:50.874 "state": "online", 00:15:50.874 "raid_level": "raid1", 00:15:50.874 "superblock": false, 00:15:50.874 "num_base_bdevs": 4, 00:15:50.874 "num_base_bdevs_discovered": 3, 00:15:50.874 "num_base_bdevs_operational": 3, 00:15:50.874 "base_bdevs_list": [ 00:15:50.874 { 00:15:50.874 "name": null, 00:15:50.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.874 "is_configured": false, 00:15:50.874 "data_offset": 0, 00:15:50.874 "data_size": 65536 00:15:50.874 }, 00:15:50.874 { 00:15:50.874 "name": "BaseBdev2", 00:15:50.874 "uuid": "ab18f075-742b-5266-9eb5-962881a71ac4", 00:15:50.874 "is_configured": true, 00:15:50.874 "data_offset": 0, 00:15:50.874 "data_size": 65536 00:15:50.874 }, 00:15:50.874 { 00:15:50.874 "name": "BaseBdev3", 00:15:50.874 "uuid": "8ad236ad-c100-592f-b9e0-5990f5f123b7", 00:15:50.874 "is_configured": true, 00:15:50.874 "data_offset": 0, 00:15:50.874 "data_size": 65536 00:15:50.874 }, 00:15:50.874 { 00:15:50.874 "name": "BaseBdev4", 00:15:50.874 "uuid": "d9821218-c284-5afa-b782-5dca82c74d53", 00:15:50.874 "is_configured": true, 00:15:50.874 "data_offset": 0, 00:15:50.874 "data_size": 65536 00:15:50.874 } 00:15:50.874 ] 00:15:50.874 }' 00:15:50.874 09:52:51 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.874 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:50.874 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.874 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:50.874 09:52:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:50.874 09:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.874 09:52:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:50.874 [2024-11-27 09:52:52.004231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:51.133 09:52:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.133 09:52:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:51.133 [2024-11-27 09:52:52.110342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:51.133 [2024-11-27 09:52:52.112726] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:51.134 [2024-11-27 09:52:52.246384] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:51.134 [2024-11-27 09:52:52.246922] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:51.393 [2024-11-27 09:52:52.465331] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:51.393 [2024-11-27 09:52:52.466533] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:51.910 127.67 IOPS, 383.00 MiB/s 
[2024-11-27T09:52:53.043Z] [2024-11-27 09:52:52.859497] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.170 [2024-11-27 09:52:53.078295] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:52.170 [2024-11-27 09:52:53.079467] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.170 "name": "raid_bdev1", 00:15:52.170 "uuid": "75749c57-c4db-4d80-8f84-df22952aa4f7", 00:15:52.170 "strip_size_kb": 0, 00:15:52.170 "state": "online", 00:15:52.170 "raid_level": "raid1", 00:15:52.170 "superblock": false, 00:15:52.170 "num_base_bdevs": 4, 00:15:52.170 
"num_base_bdevs_discovered": 4, 00:15:52.170 "num_base_bdevs_operational": 4, 00:15:52.170 "process": { 00:15:52.170 "type": "rebuild", 00:15:52.170 "target": "spare", 00:15:52.170 "progress": { 00:15:52.170 "blocks": 8192, 00:15:52.170 "percent": 12 00:15:52.170 } 00:15:52.170 }, 00:15:52.170 "base_bdevs_list": [ 00:15:52.170 { 00:15:52.170 "name": "spare", 00:15:52.170 "uuid": "b249852a-ce76-5004-8d21-140d89d2edf0", 00:15:52.170 "is_configured": true, 00:15:52.170 "data_offset": 0, 00:15:52.170 "data_size": 65536 00:15:52.170 }, 00:15:52.170 { 00:15:52.170 "name": "BaseBdev2", 00:15:52.170 "uuid": "ab18f075-742b-5266-9eb5-962881a71ac4", 00:15:52.170 "is_configured": true, 00:15:52.170 "data_offset": 0, 00:15:52.170 "data_size": 65536 00:15:52.170 }, 00:15:52.170 { 00:15:52.170 "name": "BaseBdev3", 00:15:52.170 "uuid": "8ad236ad-c100-592f-b9e0-5990f5f123b7", 00:15:52.170 "is_configured": true, 00:15:52.170 "data_offset": 0, 00:15:52.170 "data_size": 65536 00:15:52.170 }, 00:15:52.170 { 00:15:52.170 "name": "BaseBdev4", 00:15:52.170 "uuid": "d9821218-c284-5afa-b782-5dca82c74d53", 00:15:52.170 "is_configured": true, 00:15:52.170 "data_offset": 0, 00:15:52.170 "data_size": 65536 00:15:52.170 } 00:15:52.170 ] 00:15:52.170 }' 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 
-- # '[' raid1 = raid1 ']' 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.170 09:52:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.170 [2024-11-27 09:52:53.195658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:52.429 [2024-11-27 09:52:53.319957] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:52.429 [2024-11-27 09:52:53.320156] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.430 09:52:53 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.430 "name": "raid_bdev1", 00:15:52.430 "uuid": "75749c57-c4db-4d80-8f84-df22952aa4f7", 00:15:52.430 "strip_size_kb": 0, 00:15:52.430 "state": "online", 00:15:52.430 "raid_level": "raid1", 00:15:52.430 "superblock": false, 00:15:52.430 "num_base_bdevs": 4, 00:15:52.430 "num_base_bdevs_discovered": 3, 00:15:52.430 "num_base_bdevs_operational": 3, 00:15:52.430 "process": { 00:15:52.430 "type": "rebuild", 00:15:52.430 "target": "spare", 00:15:52.430 "progress": { 00:15:52.430 "blocks": 12288, 00:15:52.430 "percent": 18 00:15:52.430 } 00:15:52.430 }, 00:15:52.430 "base_bdevs_list": [ 00:15:52.430 { 00:15:52.430 "name": "spare", 00:15:52.430 "uuid": "b249852a-ce76-5004-8d21-140d89d2edf0", 00:15:52.430 "is_configured": true, 00:15:52.430 "data_offset": 0, 00:15:52.430 "data_size": 65536 00:15:52.430 }, 00:15:52.430 { 00:15:52.430 "name": null, 00:15:52.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.430 "is_configured": false, 00:15:52.430 "data_offset": 0, 00:15:52.430 "data_size": 65536 00:15:52.430 }, 00:15:52.430 { 00:15:52.430 "name": "BaseBdev3", 00:15:52.430 "uuid": "8ad236ad-c100-592f-b9e0-5990f5f123b7", 00:15:52.430 "is_configured": true, 00:15:52.430 "data_offset": 0, 00:15:52.430 "data_size": 65536 00:15:52.430 }, 00:15:52.430 { 00:15:52.430 "name": "BaseBdev4", 00:15:52.430 "uuid": "d9821218-c284-5afa-b782-5dca82c74d53", 00:15:52.430 "is_configured": true, 00:15:52.430 "data_offset": 0, 00:15:52.430 "data_size": 65536 00:15:52.430 } 00:15:52.430 ] 00:15:52.430 }' 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=488 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.430 "name": "raid_bdev1", 00:15:52.430 "uuid": "75749c57-c4db-4d80-8f84-df22952aa4f7", 00:15:52.430 "strip_size_kb": 0, 00:15:52.430 "state": "online", 00:15:52.430 "raid_level": "raid1", 00:15:52.430 
"superblock": false, 00:15:52.430 "num_base_bdevs": 4, 00:15:52.430 "num_base_bdevs_discovered": 3, 00:15:52.430 "num_base_bdevs_operational": 3, 00:15:52.430 "process": { 00:15:52.430 "type": "rebuild", 00:15:52.430 "target": "spare", 00:15:52.430 "progress": { 00:15:52.430 "blocks": 14336, 00:15:52.430 "percent": 21 00:15:52.430 } 00:15:52.430 }, 00:15:52.430 "base_bdevs_list": [ 00:15:52.430 { 00:15:52.430 "name": "spare", 00:15:52.430 "uuid": "b249852a-ce76-5004-8d21-140d89d2edf0", 00:15:52.430 "is_configured": true, 00:15:52.430 "data_offset": 0, 00:15:52.430 "data_size": 65536 00:15:52.430 }, 00:15:52.430 { 00:15:52.430 "name": null, 00:15:52.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.430 "is_configured": false, 00:15:52.430 "data_offset": 0, 00:15:52.430 "data_size": 65536 00:15:52.430 }, 00:15:52.430 { 00:15:52.430 "name": "BaseBdev3", 00:15:52.430 "uuid": "8ad236ad-c100-592f-b9e0-5990f5f123b7", 00:15:52.430 "is_configured": true, 00:15:52.430 "data_offset": 0, 00:15:52.430 "data_size": 65536 00:15:52.430 }, 00:15:52.430 { 00:15:52.430 "name": "BaseBdev4", 00:15:52.430 "uuid": "d9821218-c284-5afa-b782-5dca82c74d53", 00:15:52.430 "is_configured": true, 00:15:52.430 "data_offset": 0, 00:15:52.430 "data_size": 65536 00:15:52.430 } 00:15:52.430 ] 00:15:52.430 }' 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:52.430 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.689 [2024-11-27 09:52:53.577576] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:52.689 [2024-11-27 09:52:53.577860] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:52.689 
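
The loop traced above repeatedly fetches `bdev_raid_get_bdevs` output and extracts `.process.type` / `.process.target` with jq's `// "none"` fallback, comparing against `rebuild` / `spare`. A minimal self-contained sketch of that extraction logic follows; it parses a captured sample of the JSON with `sed` instead of a live RPC call, and `get_field` is an illustrative helper, not part of the SPDK scripts.

```shell
# Sketch of the progress-polling extraction pattern: the real test runs
#   rpc.py bdev_raid_get_bdevs all | jq -r '.process.type // "none"'
# Here a captured sample stands in for the RPC output.
raid_bdev_info='{"name":"raid_bdev1","process":{"type":"rebuild","target":"spare","progress":{"blocks":14336,"percent":21}}}'

# Extract a scalar string field; fall back to "none" when the key is
# absent, mirroring jq's `// "none"` alternative operator.
get_field() {
    val=$(printf '%s' "$raid_bdev_info" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p")
    printf '%s' "${val:-none}"
}

process_type=$(get_field type)
process_target=$(get_field target)
echo "type=$process_type target=$process_target"
```

Once `.process` disappears from the RPC output (rebuild finished), both fields fall back to `none`, which is what makes the `[[ none == \r\e\b\u\i\l\d ]]` comparison later in the trace fail and break the wait loop.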
09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.689 09:52:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:52.948 113.25 IOPS, 339.75 MiB/s [2024-11-27T09:52:54.081Z] [2024-11-27 09:52:53.821722] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:52.948 [2024-11-27 09:52:53.822308] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:52.948 [2024-11-27 09:52:54.046519] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:53.517 [2024-11-27 09:52:54.532811] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:53.517 09:52:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:53.517 09:52:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.517 09:52:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.517 09:52:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.517 09:52:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.517 09:52:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.517 09:52:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.517 09:52:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.517 09:52:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.517 09:52:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:15:53.517 09:52:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.777 09:52:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.777 "name": "raid_bdev1", 00:15:53.777 "uuid": "75749c57-c4db-4d80-8f84-df22952aa4f7", 00:15:53.777 "strip_size_kb": 0, 00:15:53.777 "state": "online", 00:15:53.777 "raid_level": "raid1", 00:15:53.777 "superblock": false, 00:15:53.777 "num_base_bdevs": 4, 00:15:53.777 "num_base_bdevs_discovered": 3, 00:15:53.777 "num_base_bdevs_operational": 3, 00:15:53.777 "process": { 00:15:53.777 "type": "rebuild", 00:15:53.777 "target": "spare", 00:15:53.777 "progress": { 00:15:53.777 "blocks": 28672, 00:15:53.777 "percent": 43 00:15:53.777 } 00:15:53.777 }, 00:15:53.777 "base_bdevs_list": [ 00:15:53.777 { 00:15:53.777 "name": "spare", 00:15:53.777 "uuid": "b249852a-ce76-5004-8d21-140d89d2edf0", 00:15:53.777 "is_configured": true, 00:15:53.777 "data_offset": 0, 00:15:53.777 "data_size": 65536 00:15:53.777 }, 00:15:53.777 { 00:15:53.777 "name": null, 00:15:53.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.777 "is_configured": false, 00:15:53.777 "data_offset": 0, 00:15:53.777 "data_size": 65536 00:15:53.777 }, 00:15:53.777 { 00:15:53.777 "name": "BaseBdev3", 00:15:53.777 "uuid": "8ad236ad-c100-592f-b9e0-5990f5f123b7", 00:15:53.777 "is_configured": true, 00:15:53.777 "data_offset": 0, 00:15:53.777 "data_size": 65536 00:15:53.777 }, 00:15:53.777 { 00:15:53.777 "name": "BaseBdev4", 00:15:53.777 "uuid": "d9821218-c284-5afa-b782-5dca82c74d53", 00:15:53.777 "is_configured": true, 00:15:53.777 "data_offset": 0, 00:15:53.777 "data_size": 65536 00:15:53.777 } 00:15:53.777 ] 00:15:53.777 }' 00:15:53.777 09:52:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.777 09:52:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:53.777 09:52:54 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.777 09:52:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.777 09:52:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:53.777 100.60 IOPS, 301.80 MiB/s [2024-11-27T09:52:54.910Z] [2024-11-27 09:52:54.869632] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:53.777 [2024-11-27 09:52:54.871217] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:54.037 [2024-11-27 09:52:55.079512] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:54.037 [2024-11-27 09:52:55.080495] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:54.606 [2024-11-27 09:52:55.428351] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:54.606 [2024-11-27 09:52:55.651520] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:54.606 [2024-11-27 09:52:55.651946] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:54.866 09:52:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:54.866 09:52:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.866 09:52:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.866 09:52:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.866 09:52:55 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.866 09:52:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.866 09:52:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.866 09:52:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.867 09:52:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.867 90.00 IOPS, 270.00 MiB/s [2024-11-27T09:52:56.000Z] 09:52:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:54.867 09:52:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.867 09:52:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.867 "name": "raid_bdev1", 00:15:54.867 "uuid": "75749c57-c4db-4d80-8f84-df22952aa4f7", 00:15:54.867 "strip_size_kb": 0, 00:15:54.867 "state": "online", 00:15:54.867 "raid_level": "raid1", 00:15:54.867 "superblock": false, 00:15:54.867 "num_base_bdevs": 4, 00:15:54.867 "num_base_bdevs_discovered": 3, 00:15:54.867 "num_base_bdevs_operational": 3, 00:15:54.867 "process": { 00:15:54.867 "type": "rebuild", 00:15:54.867 "target": "spare", 00:15:54.867 "progress": { 00:15:54.867 "blocks": 40960, 00:15:54.867 "percent": 62 00:15:54.867 } 00:15:54.867 }, 00:15:54.867 "base_bdevs_list": [ 00:15:54.867 { 00:15:54.867 "name": "spare", 00:15:54.867 "uuid": "b249852a-ce76-5004-8d21-140d89d2edf0", 00:15:54.867 "is_configured": true, 00:15:54.867 "data_offset": 0, 00:15:54.867 "data_size": 65536 00:15:54.867 }, 00:15:54.867 { 00:15:54.867 "name": null, 00:15:54.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.867 "is_configured": false, 00:15:54.867 "data_offset": 0, 00:15:54.867 "data_size": 65536 00:15:54.867 }, 00:15:54.867 { 00:15:54.867 "name": "BaseBdev3", 00:15:54.867 "uuid": 
"8ad236ad-c100-592f-b9e0-5990f5f123b7", 00:15:54.867 "is_configured": true, 00:15:54.867 "data_offset": 0, 00:15:54.867 "data_size": 65536 00:15:54.867 }, 00:15:54.867 { 00:15:54.867 "name": "BaseBdev4", 00:15:54.867 "uuid": "d9821218-c284-5afa-b782-5dca82c74d53", 00:15:54.867 "is_configured": true, 00:15:54.867 "data_offset": 0, 00:15:54.867 "data_size": 65536 00:15:54.867 } 00:15:54.867 ] 00:15:54.867 }' 00:15:54.867 09:52:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.867 09:52:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.867 09:52:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.867 09:52:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.867 09:52:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:55.126 [2024-11-27 09:52:56.101083] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:55.386 [2024-11-27 09:52:56.335807] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:55.906 83.57 IOPS, 250.71 MiB/s [2024-11-27T09:52:57.039Z] [2024-11-27 09:52:56.791623] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:15:55.906 09:52:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:55.906 09:52:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.906 09:52:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.906 09:52:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.906 09:52:56 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.906 09:52:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.906 09:52:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.906 09:52:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.906 09:52:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.906 09:52:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:55.906 09:52:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.906 09:52:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.906 "name": "raid_bdev1", 00:15:55.906 "uuid": "75749c57-c4db-4d80-8f84-df22952aa4f7", 00:15:55.906 "strip_size_kb": 0, 00:15:55.906 "state": "online", 00:15:55.906 "raid_level": "raid1", 00:15:55.906 "superblock": false, 00:15:55.906 "num_base_bdevs": 4, 00:15:55.906 "num_base_bdevs_discovered": 3, 00:15:55.906 "num_base_bdevs_operational": 3, 00:15:55.906 "process": { 00:15:55.906 "type": "rebuild", 00:15:55.906 "target": "spare", 00:15:55.906 "progress": { 00:15:55.906 "blocks": 57344, 00:15:55.906 "percent": 87 00:15:55.906 } 00:15:55.906 }, 00:15:55.906 "base_bdevs_list": [ 00:15:55.906 { 00:15:55.906 "name": "spare", 00:15:55.906 "uuid": "b249852a-ce76-5004-8d21-140d89d2edf0", 00:15:55.906 "is_configured": true, 00:15:55.906 "data_offset": 0, 00:15:55.906 "data_size": 65536 00:15:55.906 }, 00:15:55.906 { 00:15:55.906 "name": null, 00:15:55.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.906 "is_configured": false, 00:15:55.906 "data_offset": 0, 00:15:55.906 "data_size": 65536 00:15:55.906 }, 00:15:55.906 { 00:15:55.906 "name": "BaseBdev3", 00:15:55.906 "uuid": "8ad236ad-c100-592f-b9e0-5990f5f123b7", 00:15:55.906 
"is_configured": true, 00:15:55.906 "data_offset": 0, 00:15:55.906 "data_size": 65536 00:15:55.906 }, 00:15:55.906 { 00:15:55.906 "name": "BaseBdev4", 00:15:55.906 "uuid": "d9821218-c284-5afa-b782-5dca82c74d53", 00:15:55.906 "is_configured": true, 00:15:55.906 "data_offset": 0, 00:15:55.906 "data_size": 65536 00:15:55.906 } 00:15:55.906 ] 00:15:55.906 }' 00:15:55.906 09:52:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.906 09:52:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.906 09:52:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.906 [2024-11-27 09:52:57.002691] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:55.906 [2024-11-27 09:52:57.003254] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:56.166 09:52:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.166 09:52:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:56.426 [2024-11-27 09:52:57.342735] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:56.426 [2024-11-27 09:52:57.442468] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:56.426 [2024-11-27 09:52:57.444824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.945 78.12 IOPS, 234.38 MiB/s [2024-11-27T09:52:58.078Z] 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:56.945 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.945 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:56.945 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.945 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.945 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.945 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.945 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.945 09:52:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.945 09:52:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.205 09:52:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.206 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.206 "name": "raid_bdev1", 00:15:57.206 "uuid": "75749c57-c4db-4d80-8f84-df22952aa4f7", 00:15:57.206 "strip_size_kb": 0, 00:15:57.206 "state": "online", 00:15:57.206 "raid_level": "raid1", 00:15:57.206 "superblock": false, 00:15:57.206 "num_base_bdevs": 4, 00:15:57.206 "num_base_bdevs_discovered": 3, 00:15:57.206 "num_base_bdevs_operational": 3, 00:15:57.206 "base_bdevs_list": [ 00:15:57.206 { 00:15:57.206 "name": "spare", 00:15:57.206 "uuid": "b249852a-ce76-5004-8d21-140d89d2edf0", 00:15:57.206 "is_configured": true, 00:15:57.206 "data_offset": 0, 00:15:57.206 "data_size": 65536 00:15:57.206 }, 00:15:57.206 { 00:15:57.206 "name": null, 00:15:57.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.206 "is_configured": false, 00:15:57.206 "data_offset": 0, 00:15:57.206 "data_size": 65536 00:15:57.206 }, 00:15:57.206 { 00:15:57.206 "name": "BaseBdev3", 00:15:57.206 "uuid": "8ad236ad-c100-592f-b9e0-5990f5f123b7", 00:15:57.206 "is_configured": true, 00:15:57.206 "data_offset": 0, 
00:15:57.206 "data_size": 65536 00:15:57.206 }, 00:15:57.206 { 00:15:57.206 "name": "BaseBdev4", 00:15:57.206 "uuid": "d9821218-c284-5afa-b782-5dca82c74d53", 00:15:57.206 "is_configured": true, 00:15:57.206 "data_offset": 0, 00:15:57.206 "data_size": 65536 00:15:57.206 } 00:15:57.206 ] 00:15:57.206 }' 00:15:57.206 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.206 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:57.206 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.206 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:57.206 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:57.206 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:57.206 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.206 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:57.206 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:57.206 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.206 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.206 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.206 09:52:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.206 09:52:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.206 09:52:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.206 09:52:58 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.206 "name": "raid_bdev1", 00:15:57.206 "uuid": "75749c57-c4db-4d80-8f84-df22952aa4f7", 00:15:57.206 "strip_size_kb": 0, 00:15:57.206 "state": "online", 00:15:57.206 "raid_level": "raid1", 00:15:57.206 "superblock": false, 00:15:57.206 "num_base_bdevs": 4, 00:15:57.206 "num_base_bdevs_discovered": 3, 00:15:57.206 "num_base_bdevs_operational": 3, 00:15:57.206 "base_bdevs_list": [ 00:15:57.206 { 00:15:57.206 "name": "spare", 00:15:57.206 "uuid": "b249852a-ce76-5004-8d21-140d89d2edf0", 00:15:57.206 "is_configured": true, 00:15:57.206 "data_offset": 0, 00:15:57.206 "data_size": 65536 00:15:57.206 }, 00:15:57.206 { 00:15:57.206 "name": null, 00:15:57.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.206 "is_configured": false, 00:15:57.206 "data_offset": 0, 00:15:57.206 "data_size": 65536 00:15:57.206 }, 00:15:57.206 { 00:15:57.206 "name": "BaseBdev3", 00:15:57.206 "uuid": "8ad236ad-c100-592f-b9e0-5990f5f123b7", 00:15:57.206 "is_configured": true, 00:15:57.206 "data_offset": 0, 00:15:57.206 "data_size": 65536 00:15:57.206 }, 00:15:57.206 { 00:15:57.206 "name": "BaseBdev4", 00:15:57.206 "uuid": "d9821218-c284-5afa-b782-5dca82c74d53", 00:15:57.206 "is_configured": true, 00:15:57.206 "data_offset": 0, 00:15:57.206 "data_size": 65536 00:15:57.206 } 00:15:57.206 ] 00:15:57.206 }' 00:15:57.206 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.206 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:57.206 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.465 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:57.465 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:57.465 09:52:58 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.465 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.465 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.465 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.465 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.465 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.465 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.465 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.465 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.465 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.465 09:52:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.465 09:52:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.465 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.465 09:52:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.465 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.465 "name": "raid_bdev1", 00:15:57.465 "uuid": "75749c57-c4db-4d80-8f84-df22952aa4f7", 00:15:57.465 "strip_size_kb": 0, 00:15:57.465 "state": "online", 00:15:57.465 "raid_level": "raid1", 00:15:57.465 "superblock": false, 00:15:57.465 "num_base_bdevs": 4, 00:15:57.465 "num_base_bdevs_discovered": 3, 00:15:57.465 "num_base_bdevs_operational": 3, 00:15:57.465 "base_bdevs_list": [ 00:15:57.465 
{ 00:15:57.465 "name": "spare", 00:15:57.465 "uuid": "b249852a-ce76-5004-8d21-140d89d2edf0", 00:15:57.465 "is_configured": true, 00:15:57.465 "data_offset": 0, 00:15:57.465 "data_size": 65536 00:15:57.465 }, 00:15:57.465 { 00:15:57.465 "name": null, 00:15:57.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.465 "is_configured": false, 00:15:57.465 "data_offset": 0, 00:15:57.465 "data_size": 65536 00:15:57.465 }, 00:15:57.465 { 00:15:57.465 "name": "BaseBdev3", 00:15:57.465 "uuid": "8ad236ad-c100-592f-b9e0-5990f5f123b7", 00:15:57.465 "is_configured": true, 00:15:57.465 "data_offset": 0, 00:15:57.465 "data_size": 65536 00:15:57.465 }, 00:15:57.465 { 00:15:57.465 "name": "BaseBdev4", 00:15:57.465 "uuid": "d9821218-c284-5afa-b782-5dca82c74d53", 00:15:57.465 "is_configured": true, 00:15:57.465 "data_offset": 0, 00:15:57.465 "data_size": 65536 00:15:57.465 } 00:15:57.465 ] 00:15:57.465 }' 00:15:57.465 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.465 09:52:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.724 73.00 IOPS, 219.00 MiB/s [2024-11-27T09:52:58.857Z] 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:57.724 09:52:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.724 09:52:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.724 [2024-11-27 09:52:58.842824] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.724 [2024-11-27 09:52:58.842925] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.984 00:15:57.984 Latency(us) 00:15:57.984 [2024-11-27T09:52:59.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.984 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:57.984 
raid_bdev1 : 9.16 72.17 216.52 0.00 0.00 18961.22 334.48 119052.30 00:15:57.984 [2024-11-27T09:52:59.117Z] =================================================================================================================== 00:15:57.984 [2024-11-27T09:52:59.117Z] Total : 72.17 216.52 0.00 0.00 18961.22 334.48 119052.30 00:15:57.984 { 00:15:57.984 "results": [ 00:15:57.984 { 00:15:57.984 "job": "raid_bdev1", 00:15:57.984 "core_mask": "0x1", 00:15:57.984 "workload": "randrw", 00:15:57.984 "percentage": 50, 00:15:57.984 "status": "finished", 00:15:57.984 "queue_depth": 2, 00:15:57.984 "io_size": 3145728, 00:15:57.984 "runtime": 9.158558, 00:15:57.984 "iops": 72.17293377407229, 00:15:57.984 "mibps": 216.51880132221686, 00:15:57.984 "io_failed": 0, 00:15:57.984 "io_timeout": 0, 00:15:57.984 "avg_latency_us": 18961.224176680826, 00:15:57.984 "min_latency_us": 334.4768558951965, 00:15:57.984 "max_latency_us": 119052.29694323144 00:15:57.984 } 00:15:57.984 ], 00:15:57.984 "core_count": 1 00:15:57.984 } 00:15:57.984 [2024-11-27 09:52:58.904282] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.984 [2024-11-27 09:52:58.904369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.984 [2024-11-27 09:52:58.904486] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.984 [2024-11-27 09:52:58.904507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:57.984 09:52:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.984 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.984 09:52:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.984 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:57.984 09:52:58 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.984 09:52:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.984 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:57.984 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:57.984 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:57.984 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:57.984 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:57.984 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:57.984 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:57.984 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:57.984 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:57.984 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:57.984 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:57.984 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:57.984 09:52:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:58.243 /dev/nbd0 00:15:58.243 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:58.243 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:58.243 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:58.243 09:52:59 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@873 -- # local i 00:15:58.243 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.243 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.243 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:58.243 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:58.243 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.243 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.243 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.243 1+0 records in 00:15:58.243 1+0 records out 00:15:58.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357619 s, 11.5 MB/s 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' 
']' 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:58.244 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:58.503 /dev/nbd1 00:15:58.503 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:58.503 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:58.503 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:58.503 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:58.503 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.503 
09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.503 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:58.503 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:58.503 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.503 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.503 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.503 1+0 records in 00:15:58.503 1+0 records out 00:15:58.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561556 s, 7.3 MB/s 00:15:58.503 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.503 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:58.503 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.503 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.503 09:52:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:58.503 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:58.503 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:58.503 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:58.762 09:52:59 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:58.762 09:52:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:59.021 /dev/nbd1 00:15:59.021 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:59.021 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:59.021 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:59.021 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:59.021 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:59.021 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:59.021 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:59.021 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:59.021 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:59.021 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:59.021 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:15:59.021 1+0 records in 00:15:59.021 1+0 records out 00:15:59.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358614 s, 11.4 MB/s 00:15:59.021 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.021 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:59.021 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.021 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:59.021 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:59.021 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:59.021 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:59.021 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:59.289 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:59.289 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:59.289 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:59.289 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:59.289 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:59.289 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.289 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:59.548 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:59.548 09:53:00 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:59.548 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:59.548 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.548 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.548 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:59.548 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:59.548 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79025 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79025 ']' 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79025 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.549 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79025 00:15:59.809 killing process with pid 79025 00:15:59.809 Received shutdown signal, test time was about 10.969880 seconds 00:15:59.809 00:15:59.809 Latency(us) 00:15:59.809 [2024-11-27T09:53:00.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.809 [2024-11-27T09:53:00.942Z] =================================================================================================================== 00:15:59.809 [2024-11-27T09:53:00.942Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:59.809 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:59.809 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:59.809 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79025' 00:15:59.809 09:53:00 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@973 -- # kill 79025 00:15:59.809 [2024-11-27 09:53:00.690455] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:59.809 09:53:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79025 00:16:00.070 [2024-11-27 09:53:01.137628] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:01.451 00:16:01.451 real 0m14.577s 00:16:01.451 user 0m17.929s 00:16:01.451 sys 0m2.024s 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.451 ************************************ 00:16:01.451 END TEST raid_rebuild_test_io 00:16:01.451 ************************************ 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.451 09:53:02 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:16:01.451 09:53:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:01.451 09:53:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.451 09:53:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:01.451 ************************************ 00:16:01.451 START TEST raid_rebuild_test_sb_io 00:16:01.451 ************************************ 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 
00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 
-- # local strip_size 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79459 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79459 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79459 ']' 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.451 09:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:01.711 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:01.711 Zero copy mechanism will not be used. 00:16:01.711 [2024-11-27 09:53:02.586599] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:16:01.711 [2024-11-27 09:53:02.586813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79459 ] 00:16:01.711 [2024-11-27 09:53:02.766144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.970 [2024-11-27 09:53:02.903235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.230 [2024-11-27 09:53:03.143041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.230 [2024-11-27 09:53:03.143218] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.489 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:02.489 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:16:02.489 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:02.489 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:02.489 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.489 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.489 BaseBdev1_malloc 00:16:02.489 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.489 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:02.489 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.489 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.489 [2024-11-27 09:53:03.467086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:02.489 [2024-11-27 09:53:03.467235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.489 [2024-11-27 09:53:03.467285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:02.489 [2024-11-27 09:53:03.467326] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.489 [2024-11-27 09:53:03.469811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.489 [2024-11-27 09:53:03.469905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:02.489 BaseBdev1 00:16:02.489 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.489 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:02.489 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:02.489 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.489 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.489 BaseBdev2_malloc 00:16:02.489 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.489 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:16:02.490 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.490 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.490 [2024-11-27 09:53:03.530854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:02.490 [2024-11-27 09:53:03.531013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.490 [2024-11-27 09:53:03.531066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:02.490 [2024-11-27 09:53:03.531111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.490 [2024-11-27 09:53:03.533669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.490 [2024-11-27 09:53:03.533759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:02.490 BaseBdev2 00:16:02.490 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.490 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:02.490 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:02.490 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.490 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.490 BaseBdev3_malloc 00:16:02.490 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.490 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:02.490 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.490 09:53:03 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.490 [2024-11-27 09:53:03.605448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:02.490 [2024-11-27 09:53:03.605577] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.490 [2024-11-27 09:53:03.605654] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:02.490 [2024-11-27 09:53:03.605698] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.490 [2024-11-27 09:53:03.608151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.490 [2024-11-27 09:53:03.608240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:02.490 BaseBdev3 00:16:02.490 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.490 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:02.490 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:02.490 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.490 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.749 BaseBdev4_malloc 00:16:02.749 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.749 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:02.749 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.749 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.749 [2024-11-27 09:53:03.667538] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:16:02.749 [2024-11-27 09:53:03.667697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.749 [2024-11-27 09:53:03.667746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:02.749 [2024-11-27 09:53:03.667791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.749 [2024-11-27 09:53:03.670303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.750 [2024-11-27 09:53:03.670391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:02.750 BaseBdev4 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.750 spare_malloc 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.750 spare_delay 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.750 [2024-11-27 09:53:03.737329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:02.750 [2024-11-27 09:53:03.737453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.750 [2024-11-27 09:53:03.737497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:02.750 [2024-11-27 09:53:03.737537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.750 [2024-11-27 09:53:03.739987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.750 [2024-11-27 09:53:03.740090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:02.750 spare 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.750 [2024-11-27 09:53:03.749358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.750 [2024-11-27 09:53:03.751553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:02.750 [2024-11-27 09:53:03.751674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:02.750 [2024-11-27 09:53:03.751763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:02.750 [2024-11-27 09:53:03.752049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:16:02.750 [2024-11-27 09:53:03.752112] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:02.750 [2024-11-27 09:53:03.752432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:02.750 [2024-11-27 09:53:03.752721] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:02.750 [2024-11-27 09:53:03.752776] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:02.750 [2024-11-27 09:53:03.753013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.750 "name": "raid_bdev1", 00:16:02.750 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:02.750 "strip_size_kb": 0, 00:16:02.750 "state": "online", 00:16:02.750 "raid_level": "raid1", 00:16:02.750 "superblock": true, 00:16:02.750 "num_base_bdevs": 4, 00:16:02.750 "num_base_bdevs_discovered": 4, 00:16:02.750 "num_base_bdevs_operational": 4, 00:16:02.750 "base_bdevs_list": [ 00:16:02.750 { 00:16:02.750 "name": "BaseBdev1", 00:16:02.750 "uuid": "c110ef68-18b9-5cad-b6cb-43ab42adbd6b", 00:16:02.750 "is_configured": true, 00:16:02.750 "data_offset": 2048, 00:16:02.750 "data_size": 63488 00:16:02.750 }, 00:16:02.750 { 00:16:02.750 "name": "BaseBdev2", 00:16:02.750 "uuid": "b5659c25-cbe3-5115-ad9a-e62c39fdc041", 00:16:02.750 "is_configured": true, 00:16:02.750 "data_offset": 2048, 00:16:02.750 "data_size": 63488 00:16:02.750 }, 00:16:02.750 { 00:16:02.750 "name": "BaseBdev3", 00:16:02.750 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:02.750 "is_configured": true, 00:16:02.750 "data_offset": 2048, 00:16:02.750 "data_size": 63488 00:16:02.750 }, 00:16:02.750 { 00:16:02.750 "name": "BaseBdev4", 00:16:02.750 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:02.750 "is_configured": true, 00:16:02.750 "data_offset": 2048, 00:16:02.750 "data_size": 63488 00:16:02.750 } 00:16:02.750 ] 00:16:02.750 }' 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:02.750 09:53:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.319 [2024-11-27 09:53:04.264963] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:03.319 09:53:04 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.319 [2024-11-27 09:53:04.356400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:16:03.319 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.320 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.320 "name": "raid_bdev1", 00:16:03.320 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:03.320 "strip_size_kb": 0, 00:16:03.320 "state": "online", 00:16:03.320 "raid_level": "raid1", 00:16:03.320 "superblock": true, 00:16:03.320 "num_base_bdevs": 4, 00:16:03.320 "num_base_bdevs_discovered": 3, 00:16:03.320 "num_base_bdevs_operational": 3, 00:16:03.320 "base_bdevs_list": [ 00:16:03.320 { 00:16:03.320 "name": null, 00:16:03.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.320 "is_configured": false, 00:16:03.320 "data_offset": 0, 00:16:03.320 "data_size": 63488 00:16:03.320 }, 00:16:03.320 { 00:16:03.320 "name": "BaseBdev2", 00:16:03.320 "uuid": "b5659c25-cbe3-5115-ad9a-e62c39fdc041", 00:16:03.320 "is_configured": true, 00:16:03.320 "data_offset": 2048, 00:16:03.320 "data_size": 63488 00:16:03.320 }, 00:16:03.320 { 00:16:03.320 "name": "BaseBdev3", 00:16:03.320 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:03.320 "is_configured": true, 00:16:03.320 "data_offset": 2048, 00:16:03.320 "data_size": 63488 00:16:03.320 }, 00:16:03.320 { 00:16:03.320 "name": "BaseBdev4", 00:16:03.320 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:03.320 "is_configured": true, 00:16:03.320 "data_offset": 2048, 00:16:03.320 "data_size": 63488 00:16:03.320 } 00:16:03.320 ] 00:16:03.320 }' 00:16:03.320 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.320 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.579 [2024-11-27 09:53:04.458298] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:03.579 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:03.579 Zero copy mechanism will not be used. 
00:16:03.579 Running I/O for 60 seconds... 00:16:03.837 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:03.837 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.837 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:03.837 [2024-11-27 09:53:04.797209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:03.837 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.837 09:53:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:03.837 [2024-11-27 09:53:04.847587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:03.837 [2024-11-27 09:53:04.850153] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:04.102 [2024-11-27 09:53:04.969868] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:04.102 [2024-11-27 09:53:04.970977] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:04.102 [2024-11-27 09:53:05.096234] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:04.102 [2024-11-27 09:53:05.097653] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:04.400 123.00 IOPS, 369.00 MiB/s [2024-11-27T09:53:05.533Z] [2024-11-27 09:53:05.472264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:04.680 [2024-11-27 09:53:05.689750] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:04.940 
09:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.940 09:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.940 09:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.940 09:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.940 09:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.940 09:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.940 09:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.940 09:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.940 09:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.940 09:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.940 09:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.940 "name": "raid_bdev1", 00:16:04.940 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:04.940 "strip_size_kb": 0, 00:16:04.940 "state": "online", 00:16:04.940 "raid_level": "raid1", 00:16:04.941 "superblock": true, 00:16:04.941 "num_base_bdevs": 4, 00:16:04.941 "num_base_bdevs_discovered": 4, 00:16:04.941 "num_base_bdevs_operational": 4, 00:16:04.941 "process": { 00:16:04.941 "type": "rebuild", 00:16:04.941 "target": "spare", 00:16:04.941 "progress": { 00:16:04.941 "blocks": 10240, 00:16:04.941 "percent": 16 00:16:04.941 } 00:16:04.941 }, 00:16:04.941 "base_bdevs_list": [ 00:16:04.941 { 00:16:04.941 "name": "spare", 00:16:04.941 "uuid": "567cb595-a362-528a-ab67-5c2f7e42d667", 00:16:04.941 "is_configured": true, 00:16:04.941 "data_offset": 
2048, 00:16:04.941 "data_size": 63488 00:16:04.941 }, 00:16:04.941 { 00:16:04.941 "name": "BaseBdev2", 00:16:04.941 "uuid": "b5659c25-cbe3-5115-ad9a-e62c39fdc041", 00:16:04.941 "is_configured": true, 00:16:04.941 "data_offset": 2048, 00:16:04.941 "data_size": 63488 00:16:04.941 }, 00:16:04.941 { 00:16:04.941 "name": "BaseBdev3", 00:16:04.941 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:04.941 "is_configured": true, 00:16:04.941 "data_offset": 2048, 00:16:04.941 "data_size": 63488 00:16:04.941 }, 00:16:04.941 { 00:16:04.941 "name": "BaseBdev4", 00:16:04.941 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:04.941 "is_configured": true, 00:16:04.941 "data_offset": 2048, 00:16:04.941 "data_size": 63488 00:16:04.941 } 00:16:04.941 ] 00:16:04.941 }' 00:16:04.941 09:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.941 09:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.941 09:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.941 09:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.941 09:53:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:04.941 09:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.941 09:53:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:04.941 [2024-11-27 09:53:05.977511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.941 [2024-11-27 09:53:06.053793] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:04.941 [2024-11-27 09:53:06.067222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.941 [2024-11-27 09:53:06.067297] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:04.941 [2024-11-27 09:53:06.067319] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:05.201 [2024-11-27 09:53:06.108815] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:16:05.201 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.201 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:05.201 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.201 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.201 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.201 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.201 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.201 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.201 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.201 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.201 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.201 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.201 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.201 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.201 09:53:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.201 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.201 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.201 "name": "raid_bdev1", 00:16:05.201 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:05.201 "strip_size_kb": 0, 00:16:05.201 "state": "online", 00:16:05.201 "raid_level": "raid1", 00:16:05.201 "superblock": true, 00:16:05.201 "num_base_bdevs": 4, 00:16:05.201 "num_base_bdevs_discovered": 3, 00:16:05.201 "num_base_bdevs_operational": 3, 00:16:05.201 "base_bdevs_list": [ 00:16:05.201 { 00:16:05.201 "name": null, 00:16:05.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.201 "is_configured": false, 00:16:05.201 "data_offset": 0, 00:16:05.201 "data_size": 63488 00:16:05.201 }, 00:16:05.201 { 00:16:05.201 "name": "BaseBdev2", 00:16:05.201 "uuid": "b5659c25-cbe3-5115-ad9a-e62c39fdc041", 00:16:05.201 "is_configured": true, 00:16:05.201 "data_offset": 2048, 00:16:05.201 "data_size": 63488 00:16:05.201 }, 00:16:05.201 { 00:16:05.201 "name": "BaseBdev3", 00:16:05.201 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:05.201 "is_configured": true, 00:16:05.201 "data_offset": 2048, 00:16:05.201 "data_size": 63488 00:16:05.201 }, 00:16:05.201 { 00:16:05.201 "name": "BaseBdev4", 00:16:05.201 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:05.201 "is_configured": true, 00:16:05.201 "data_offset": 2048, 00:16:05.201 "data_size": 63488 00:16:05.201 } 00:16:05.201 ] 00:16:05.201 }' 00:16:05.201 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.201 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.461 124.50 IOPS, 373.50 MiB/s [2024-11-27T09:53:06.594Z] 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:16:05.461 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.461 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:05.461 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:05.461 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.461 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.461 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.461 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.461 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.720 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.720 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.720 "name": "raid_bdev1", 00:16:05.720 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:05.720 "strip_size_kb": 0, 00:16:05.720 "state": "online", 00:16:05.720 "raid_level": "raid1", 00:16:05.720 "superblock": true, 00:16:05.720 "num_base_bdevs": 4, 00:16:05.720 "num_base_bdevs_discovered": 3, 00:16:05.720 "num_base_bdevs_operational": 3, 00:16:05.720 "base_bdevs_list": [ 00:16:05.720 { 00:16:05.720 "name": null, 00:16:05.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.720 "is_configured": false, 00:16:05.720 "data_offset": 0, 00:16:05.720 "data_size": 63488 00:16:05.720 }, 00:16:05.720 { 00:16:05.720 "name": "BaseBdev2", 00:16:05.720 "uuid": "b5659c25-cbe3-5115-ad9a-e62c39fdc041", 00:16:05.720 "is_configured": true, 00:16:05.720 "data_offset": 2048, 00:16:05.720 "data_size": 63488 00:16:05.720 }, 00:16:05.720 { 00:16:05.720 "name": "BaseBdev3", 
00:16:05.720 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:05.720 "is_configured": true, 00:16:05.720 "data_offset": 2048, 00:16:05.720 "data_size": 63488 00:16:05.720 }, 00:16:05.720 { 00:16:05.720 "name": "BaseBdev4", 00:16:05.720 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:05.720 "is_configured": true, 00:16:05.720 "data_offset": 2048, 00:16:05.720 "data_size": 63488 00:16:05.720 } 00:16:05.720 ] 00:16:05.720 }' 00:16:05.720 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.720 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:05.720 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.720 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:05.720 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:05.720 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.720 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:05.720 [2024-11-27 09:53:06.718409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:05.720 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.720 09:53:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:05.721 [2024-11-27 09:53:06.799672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:05.721 [2024-11-27 09:53:06.802171] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:05.980 [2024-11-27 09:53:06.919144] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:05.980 
[2024-11-27 09:53:06.921834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:06.240 [2024-11-27 09:53:07.162750] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:06.240 [2024-11-27 09:53:07.164154] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:06.499 118.67 IOPS, 356.00 MiB/s [2024-11-27T09:53:07.632Z] [2024-11-27 09:53:07.508743] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:06.499 [2024-11-27 09:53:07.511454] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:06.758 [2024-11-27 09:53:07.733016] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:06.758 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.758 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.758 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.758 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.758 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.758 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.758 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.758 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.758 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:06.758 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.758 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.758 "name": "raid_bdev1", 00:16:06.758 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:06.758 "strip_size_kb": 0, 00:16:06.758 "state": "online", 00:16:06.758 "raid_level": "raid1", 00:16:06.758 "superblock": true, 00:16:06.758 "num_base_bdevs": 4, 00:16:06.758 "num_base_bdevs_discovered": 4, 00:16:06.758 "num_base_bdevs_operational": 4, 00:16:06.758 "process": { 00:16:06.758 "type": "rebuild", 00:16:06.758 "target": "spare", 00:16:06.758 "progress": { 00:16:06.758 "blocks": 10240, 00:16:06.758 "percent": 16 00:16:06.758 } 00:16:06.758 }, 00:16:06.758 "base_bdevs_list": [ 00:16:06.758 { 00:16:06.758 "name": "spare", 00:16:06.758 "uuid": "567cb595-a362-528a-ab67-5c2f7e42d667", 00:16:06.758 "is_configured": true, 00:16:06.758 "data_offset": 2048, 00:16:06.758 "data_size": 63488 00:16:06.759 }, 00:16:06.759 { 00:16:06.759 "name": "BaseBdev2", 00:16:06.759 "uuid": "b5659c25-cbe3-5115-ad9a-e62c39fdc041", 00:16:06.759 "is_configured": true, 00:16:06.759 "data_offset": 2048, 00:16:06.759 "data_size": 63488 00:16:06.759 }, 00:16:06.759 { 00:16:06.759 "name": "BaseBdev3", 00:16:06.759 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:06.759 "is_configured": true, 00:16:06.759 "data_offset": 2048, 00:16:06.759 "data_size": 63488 00:16:06.759 }, 00:16:06.759 { 00:16:06.759 "name": "BaseBdev4", 00:16:06.759 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:06.759 "is_configured": true, 00:16:06.759 "data_offset": 2048, 00:16:06.759 "data_size": 63488 00:16:06.759 } 00:16:06.759 ] 00:16:06.759 }' 00:16:06.759 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.759 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:16:06.759 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.018 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.018 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:07.018 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:07.018 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:07.018 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:07.018 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:07.018 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:07.018 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:07.018 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.018 09:53:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.018 [2024-11-27 09:53:07.909632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:07.018 [2024-11-27 09:53:08.060734] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:07.018 [2024-11-27 09:53:08.060896] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:07.018 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.018 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:07.018 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:07.018 09:53:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.018 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.018 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.018 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.018 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.018 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.018 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.018 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.018 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.018 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.018 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.018 "name": "raid_bdev1", 00:16:07.018 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:07.018 "strip_size_kb": 0, 00:16:07.018 "state": "online", 00:16:07.018 "raid_level": "raid1", 00:16:07.018 "superblock": true, 00:16:07.018 "num_base_bdevs": 4, 00:16:07.018 "num_base_bdevs_discovered": 3, 00:16:07.018 "num_base_bdevs_operational": 3, 00:16:07.018 "process": { 00:16:07.018 "type": "rebuild", 00:16:07.018 "target": "spare", 00:16:07.018 "progress": { 00:16:07.018 "blocks": 12288, 00:16:07.018 "percent": 19 00:16:07.018 } 00:16:07.018 }, 00:16:07.018 "base_bdevs_list": [ 00:16:07.018 { 00:16:07.018 "name": "spare", 00:16:07.018 "uuid": "567cb595-a362-528a-ab67-5c2f7e42d667", 00:16:07.018 "is_configured": true, 00:16:07.019 "data_offset": 2048, 
00:16:07.019 "data_size": 63488 00:16:07.019 }, 00:16:07.019 { 00:16:07.019 "name": null, 00:16:07.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.019 "is_configured": false, 00:16:07.019 "data_offset": 0, 00:16:07.019 "data_size": 63488 00:16:07.019 }, 00:16:07.019 { 00:16:07.019 "name": "BaseBdev3", 00:16:07.019 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:07.019 "is_configured": true, 00:16:07.019 "data_offset": 2048, 00:16:07.019 "data_size": 63488 00:16:07.019 }, 00:16:07.019 { 00:16:07.019 "name": "BaseBdev4", 00:16:07.019 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:07.019 "is_configured": true, 00:16:07.019 "data_offset": 2048, 00:16:07.019 "data_size": 63488 00:16:07.019 } 00:16:07.019 ] 00:16:07.019 }' 00:16:07.019 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.278 [2024-11-27 09:53:08.182224] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:07.278 [2024-11-27 09:53:08.184096] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=503 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.278 "name": "raid_bdev1", 00:16:07.278 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:07.278 "strip_size_kb": 0, 00:16:07.278 "state": "online", 00:16:07.278 "raid_level": "raid1", 00:16:07.278 "superblock": true, 00:16:07.278 "num_base_bdevs": 4, 00:16:07.278 "num_base_bdevs_discovered": 3, 00:16:07.278 "num_base_bdevs_operational": 3, 00:16:07.278 "process": { 00:16:07.278 "type": "rebuild", 00:16:07.278 "target": "spare", 00:16:07.278 "progress": { 00:16:07.278 "blocks": 14336, 00:16:07.278 "percent": 22 00:16:07.278 } 00:16:07.278 }, 00:16:07.278 "base_bdevs_list": [ 00:16:07.278 { 00:16:07.278 "name": "spare", 00:16:07.278 "uuid": "567cb595-a362-528a-ab67-5c2f7e42d667", 00:16:07.278 "is_configured": true, 00:16:07.278 "data_offset": 2048, 00:16:07.278 "data_size": 63488 00:16:07.278 }, 00:16:07.278 { 00:16:07.278 "name": null, 00:16:07.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.278 "is_configured": false, 00:16:07.278 
"data_offset": 0, 00:16:07.278 "data_size": 63488 00:16:07.278 }, 00:16:07.278 { 00:16:07.278 "name": "BaseBdev3", 00:16:07.278 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:07.278 "is_configured": true, 00:16:07.278 "data_offset": 2048, 00:16:07.278 "data_size": 63488 00:16:07.278 }, 00:16:07.278 { 00:16:07.278 "name": "BaseBdev4", 00:16:07.278 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:07.278 "is_configured": true, 00:16:07.278 "data_offset": 2048, 00:16:07.278 "data_size": 63488 00:16:07.278 } 00:16:07.278 ] 00:16:07.278 }' 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.278 09:53:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:07.278 [2024-11-27 09:53:08.404298] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:07.278 [2024-11-27 09:53:08.405315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:07.796 103.50 IOPS, 310.50 MiB/s [2024-11-27T09:53:08.929Z] [2024-11-27 09:53:08.733588] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:07.796 [2024-11-27 09:53:08.734504] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:08.365 09:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.365 09:53:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.365 09:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.365 09:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.365 09:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.365 09:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.365 09:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.365 09:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.365 09:53:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.365 09:53:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:08.365 09:53:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.365 09:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.365 "name": "raid_bdev1", 00:16:08.365 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:08.365 "strip_size_kb": 0, 00:16:08.365 "state": "online", 00:16:08.365 "raid_level": "raid1", 00:16:08.365 "superblock": true, 00:16:08.365 "num_base_bdevs": 4, 00:16:08.365 "num_base_bdevs_discovered": 3, 00:16:08.365 "num_base_bdevs_operational": 3, 00:16:08.365 "process": { 00:16:08.365 "type": "rebuild", 00:16:08.365 "target": "spare", 00:16:08.365 "progress": { 00:16:08.365 "blocks": 30720, 00:16:08.365 "percent": 48 00:16:08.365 } 00:16:08.365 }, 00:16:08.365 "base_bdevs_list": [ 00:16:08.365 { 00:16:08.365 "name": "spare", 00:16:08.365 "uuid": "567cb595-a362-528a-ab67-5c2f7e42d667", 00:16:08.365 "is_configured": true, 00:16:08.365 "data_offset": 2048, 00:16:08.365 "data_size": 63488 
00:16:08.365 }, 00:16:08.365 { 00:16:08.365 "name": null, 00:16:08.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.365 "is_configured": false, 00:16:08.365 "data_offset": 0, 00:16:08.365 "data_size": 63488 00:16:08.365 }, 00:16:08.365 { 00:16:08.365 "name": "BaseBdev3", 00:16:08.365 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:08.365 "is_configured": true, 00:16:08.365 "data_offset": 2048, 00:16:08.365 "data_size": 63488 00:16:08.365 }, 00:16:08.365 { 00:16:08.365 "name": "BaseBdev4", 00:16:08.365 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:08.365 "is_configured": true, 00:16:08.365 "data_offset": 2048, 00:16:08.365 "data_size": 63488 00:16:08.365 } 00:16:08.365 ] 00:16:08.365 }' 00:16:08.365 09:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.365 [2024-11-27 09:53:09.436593] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:16:08.365 09:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.365 97.00 IOPS, 291.00 MiB/s [2024-11-27T09:53:09.498Z] 09:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.624 09:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.624 09:53:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:08.624 [2024-11-27 09:53:09.649886] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:09.192 [2024-11-27 09:53:10.238250] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:16:09.451 [2024-11-27 09:53:10.455431] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 
00:16:09.451 [2024-11-27 09:53:10.456120] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:09.451 87.67 IOPS, 263.00 MiB/s [2024-11-27T09:53:10.584Z] 09:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.451 09:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.451 09:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.451 09:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.451 09:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.451 09:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.451 09:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.451 09:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.451 09:53:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.451 09:53:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:09.451 09:53:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.451 09:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.451 "name": "raid_bdev1", 00:16:09.452 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:09.452 "strip_size_kb": 0, 00:16:09.452 "state": "online", 00:16:09.452 "raid_level": "raid1", 00:16:09.452 "superblock": true, 00:16:09.452 "num_base_bdevs": 4, 00:16:09.452 "num_base_bdevs_discovered": 3, 00:16:09.452 "num_base_bdevs_operational": 3, 00:16:09.452 "process": { 00:16:09.452 "type": "rebuild", 00:16:09.452 
"target": "spare", 00:16:09.452 "progress": { 00:16:09.452 "blocks": 47104, 00:16:09.452 "percent": 74 00:16:09.452 } 00:16:09.452 }, 00:16:09.452 "base_bdevs_list": [ 00:16:09.452 { 00:16:09.452 "name": "spare", 00:16:09.452 "uuid": "567cb595-a362-528a-ab67-5c2f7e42d667", 00:16:09.452 "is_configured": true, 00:16:09.452 "data_offset": 2048, 00:16:09.452 "data_size": 63488 00:16:09.452 }, 00:16:09.452 { 00:16:09.452 "name": null, 00:16:09.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.452 "is_configured": false, 00:16:09.452 "data_offset": 0, 00:16:09.452 "data_size": 63488 00:16:09.452 }, 00:16:09.452 { 00:16:09.452 "name": "BaseBdev3", 00:16:09.452 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:09.452 "is_configured": true, 00:16:09.452 "data_offset": 2048, 00:16:09.452 "data_size": 63488 00:16:09.452 }, 00:16:09.452 { 00:16:09.452 "name": "BaseBdev4", 00:16:09.452 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:09.452 "is_configured": true, 00:16:09.452 "data_offset": 2048, 00:16:09.452 "data_size": 63488 00:16:09.452 } 00:16:09.452 ] 00:16:09.452 }' 00:16:09.452 09:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.712 09:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.712 09:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.712 09:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.712 09:53:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:09.712 [2024-11-27 09:53:10.779675] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:16:09.971 [2024-11-27 09:53:11.000164] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:10.231 
[2024-11-27 09:53:11.331838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:16:10.490 78.57 IOPS, 235.71 MiB/s [2024-11-27T09:53:11.623Z] [2024-11-27 09:53:11.553620] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:16:10.749 09:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:10.749 09:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.749 09:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.749 09:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.749 09:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.749 09:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.749 09:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.750 09:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.750 09:53:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.750 09:53:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:10.750 09:53:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.750 09:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.750 "name": "raid_bdev1", 00:16:10.750 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:10.750 "strip_size_kb": 0, 00:16:10.750 "state": "online", 00:16:10.750 "raid_level": "raid1", 00:16:10.750 "superblock": true, 00:16:10.750 "num_base_bdevs": 4, 
00:16:10.750 "num_base_bdevs_discovered": 3, 00:16:10.750 "num_base_bdevs_operational": 3, 00:16:10.750 "process": { 00:16:10.750 "type": "rebuild", 00:16:10.750 "target": "spare", 00:16:10.750 "progress": { 00:16:10.750 "blocks": 59392, 00:16:10.750 "percent": 93 00:16:10.750 } 00:16:10.750 }, 00:16:10.750 "base_bdevs_list": [ 00:16:10.750 { 00:16:10.750 "name": "spare", 00:16:10.750 "uuid": "567cb595-a362-528a-ab67-5c2f7e42d667", 00:16:10.750 "is_configured": true, 00:16:10.750 "data_offset": 2048, 00:16:10.750 "data_size": 63488 00:16:10.750 }, 00:16:10.750 { 00:16:10.750 "name": null, 00:16:10.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.750 "is_configured": false, 00:16:10.750 "data_offset": 0, 00:16:10.750 "data_size": 63488 00:16:10.750 }, 00:16:10.750 { 00:16:10.750 "name": "BaseBdev3", 00:16:10.750 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:10.750 "is_configured": true, 00:16:10.750 "data_offset": 2048, 00:16:10.750 "data_size": 63488 00:16:10.750 }, 00:16:10.750 { 00:16:10.750 "name": "BaseBdev4", 00:16:10.750 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:10.750 "is_configured": true, 00:16:10.750 "data_offset": 2048, 00:16:10.750 "data_size": 63488 00:16:10.750 } 00:16:10.750 ] 00:16:10.750 }' 00:16:10.750 09:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.750 09:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.750 09:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.750 09:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.750 09:53:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:11.009 [2024-11-27 09:53:11.894956] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:11.009 [2024-11-27 09:53:12.000122] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:11.010 [2024-11-27 09:53:12.004842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.838 73.75 IOPS, 221.25 MiB/s [2024-11-27T09:53:12.971Z] 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.838 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.838 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.838 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.838 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.838 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.838 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.838 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.838 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.838 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.838 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.838 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.838 "name": "raid_bdev1", 00:16:11.838 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:11.838 "strip_size_kb": 0, 00:16:11.838 "state": "online", 00:16:11.838 "raid_level": "raid1", 00:16:11.838 "superblock": true, 00:16:11.838 "num_base_bdevs": 4, 00:16:11.838 "num_base_bdevs_discovered": 3, 00:16:11.838 "num_base_bdevs_operational": 3, 00:16:11.838 
"base_bdevs_list": [ 00:16:11.838 { 00:16:11.838 "name": "spare", 00:16:11.838 "uuid": "567cb595-a362-528a-ab67-5c2f7e42d667", 00:16:11.838 "is_configured": true, 00:16:11.838 "data_offset": 2048, 00:16:11.838 "data_size": 63488 00:16:11.838 }, 00:16:11.838 { 00:16:11.838 "name": null, 00:16:11.838 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.838 "is_configured": false, 00:16:11.838 "data_offset": 0, 00:16:11.838 "data_size": 63488 00:16:11.838 }, 00:16:11.838 { 00:16:11.838 "name": "BaseBdev3", 00:16:11.838 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:11.838 "is_configured": true, 00:16:11.838 "data_offset": 2048, 00:16:11.838 "data_size": 63488 00:16:11.838 }, 00:16:11.838 { 00:16:11.838 "name": "BaseBdev4", 00:16:11.838 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:11.838 "is_configured": true, 00:16:11.838 "data_offset": 2048, 00:16:11.838 "data_size": 63488 00:16:11.838 } 00:16:11.838 ] 00:16:11.838 }' 00:16:11.838 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.838 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:11.838 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.838 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:11.838 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:16:11.838 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.838 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.838 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.839 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.839 09:53:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.839 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.839 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.839 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:11.839 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.839 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.098 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.098 "name": "raid_bdev1", 00:16:12.098 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:12.098 "strip_size_kb": 0, 00:16:12.098 "state": "online", 00:16:12.098 "raid_level": "raid1", 00:16:12.098 "superblock": true, 00:16:12.098 "num_base_bdevs": 4, 00:16:12.098 "num_base_bdevs_discovered": 3, 00:16:12.098 "num_base_bdevs_operational": 3, 00:16:12.098 "base_bdevs_list": [ 00:16:12.098 { 00:16:12.098 "name": "spare", 00:16:12.098 "uuid": "567cb595-a362-528a-ab67-5c2f7e42d667", 00:16:12.098 "is_configured": true, 00:16:12.098 "data_offset": 2048, 00:16:12.098 "data_size": 63488 00:16:12.098 }, 00:16:12.098 { 00:16:12.098 "name": null, 00:16:12.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.098 "is_configured": false, 00:16:12.098 "data_offset": 0, 00:16:12.098 "data_size": 63488 00:16:12.098 }, 00:16:12.098 { 00:16:12.098 "name": "BaseBdev3", 00:16:12.098 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:12.098 "is_configured": true, 00:16:12.098 "data_offset": 2048, 00:16:12.098 "data_size": 63488 00:16:12.098 }, 00:16:12.098 { 00:16:12.098 "name": "BaseBdev4", 00:16:12.098 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:12.098 "is_configured": true, 00:16:12.098 "data_offset": 2048, 
00:16:12.098 "data_size": 63488 00:16:12.098 } 00:16:12.098 ] 00:16:12.098 }' 00:16:12.098 09:53:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.098 "name": "raid_bdev1", 00:16:12.098 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:12.098 "strip_size_kb": 0, 00:16:12.098 "state": "online", 00:16:12.098 "raid_level": "raid1", 00:16:12.098 "superblock": true, 00:16:12.098 "num_base_bdevs": 4, 00:16:12.098 "num_base_bdevs_discovered": 3, 00:16:12.098 "num_base_bdevs_operational": 3, 00:16:12.098 "base_bdevs_list": [ 00:16:12.098 { 00:16:12.098 "name": "spare", 00:16:12.098 "uuid": "567cb595-a362-528a-ab67-5c2f7e42d667", 00:16:12.098 "is_configured": true, 00:16:12.098 "data_offset": 2048, 00:16:12.098 "data_size": 63488 00:16:12.098 }, 00:16:12.098 { 00:16:12.098 "name": null, 00:16:12.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.098 "is_configured": false, 00:16:12.098 "data_offset": 0, 00:16:12.098 "data_size": 63488 00:16:12.098 }, 00:16:12.098 { 00:16:12.098 "name": "BaseBdev3", 00:16:12.098 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:12.098 "is_configured": true, 00:16:12.098 "data_offset": 2048, 00:16:12.098 "data_size": 63488 00:16:12.098 }, 00:16:12.098 { 00:16:12.098 "name": "BaseBdev4", 00:16:12.098 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:12.098 "is_configured": true, 00:16:12.098 "data_offset": 2048, 00:16:12.098 "data_size": 63488 00:16:12.098 } 00:16:12.098 ] 00:16:12.098 }' 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.098 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.617 69.11 IOPS, 207.33 MiB/s [2024-11-27T09:53:13.750Z] 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:12.617 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.617 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.617 [2024-11-27 09:53:13.507626] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:12.617 [2024-11-27 09:53:13.507724] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:12.617 00:16:12.617 Latency(us) 00:16:12.617 [2024-11-27T09:53:13.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.617 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:12.617 raid_bdev1 : 9.12 68.45 205.36 0.00 0.00 20699.66 355.94 115847.04 00:16:12.617 [2024-11-27T09:53:13.750Z] =================================================================================================================== 00:16:12.617 [2024-11-27T09:53:13.750Z] Total : 68.45 205.36 0.00 0.00 20699.66 355.94 115847.04 00:16:12.617 [2024-11-27 09:53:13.582028] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:12.617 [2024-11-27 09:53:13.582208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.618 [2024-11-27 09:53:13.582343] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:12.618 [2024-11-27 09:53:13.582406] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:12.618 { 00:16:12.618 "results": [ 00:16:12.618 { 00:16:12.618 "job": "raid_bdev1", 00:16:12.618 "core_mask": "0x1", 00:16:12.618 "workload": "randrw", 00:16:12.618 "percentage": 50, 00:16:12.618 "status": "finished", 00:16:12.618 "queue_depth": 2, 00:16:12.618 "io_size": 3145728, 00:16:12.618 "runtime": 9.115919, 00:16:12.618 "iops": 
68.45168325870381, 00:16:12.618 "mibps": 205.35504977611143, 00:16:12.618 "io_failed": 0, 00:16:12.618 "io_timeout": 0, 00:16:12.618 "avg_latency_us": 20699.662389430076, 00:16:12.618 "min_latency_us": 355.9406113537118, 00:16:12.618 "max_latency_us": 115847.04279475982 00:16:12.618 } 00:16:12.618 ], 00:16:12.618 "core_count": 1 00:16:12.618 } 00:16:12.618 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.618 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.618 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:12.618 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.618 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:12.618 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.618 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:12.618 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:12.618 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:12.618 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:12.618 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:12.618 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:12.618 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:12.618 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:12.618 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:12.618 09:53:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:12.618 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:12.618 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:12.618 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:12.876 /dev/nbd0 00:16:12.876 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:12.876 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:12.876 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:12.876 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:12.876 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:12.876 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:12.876 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:12.876 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:12.876 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:12.876 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:12.876 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:12.876 1+0 records in 00:16:12.876 1+0 records out 00:16:12.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436361 s, 9.4 MB/s 00:16:12.876 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.876 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:12.876 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:12.876 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:12.876 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:12.876 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:12.876 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:12.876 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:12.877 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:12.877 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:12.877 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:12.877 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:12.877 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:12.877 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:12.877 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:12.877 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:12.877 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:12.877 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:12.877 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@12 -- # local i 00:16:12.877 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:12.877 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:12.877 09:53:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:13.135 /dev/nbd1 00:16:13.135 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:13.135 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:13.135 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:13.135 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:13.135 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:13.135 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:13.135 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:13.135 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:13.135 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:13.135 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:13.135 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:13.135 1+0 records in 00:16:13.135 1+0 records out 00:16:13.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346584 s, 11.8 MB/s 00:16:13.136 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:16:13.136 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:13.136 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.136 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:13.136 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:13.136 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:13.136 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:13.136 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:13.394 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:13.394 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:13.394 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:13.394 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:13.394 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:13.394 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:13.394 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:13.653 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:13.653 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:13.653 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:13.653 09:53:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:13.653 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:13.653 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:13.653 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:13.653 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:13.653 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:13.653 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:13.653 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:13.653 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:13.653 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:13.653 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:13.653 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:13.653 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:13.653 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:13.653 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:13.654 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:13.654 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:13.913 /dev/nbd1 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 
00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:13.913 1+0 records in 00:16:13.913 1+0 records out 00:16:13.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553818 s, 7.4 MB/s 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # 
(( i++ )) 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:13.913 09:53:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:14.173 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:14.173 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:14.173 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:14.173 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:14.173 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:14.173 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:14.173 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:14.173 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:14.173 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd0 00:16:14.173 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:14.173 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:14.173 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:14.173 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:14.173 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:14.173 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.431 [2024-11-27 09:53:15.388187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:14.431 [2024-11-27 09:53:15.388331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.431 [2024-11-27 09:53:15.388379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:14.431 [2024-11-27 09:53:15.388421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.431 [2024-11-27 09:53:15.391101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.431 [2024-11-27 09:53:15.391143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:14.431 [2024-11-27 09:53:15.391263] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:14.431 [2024-11-27 09:53:15.391334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:14.431 [2024-11-27 09:53:15.391514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:14.431 [2024-11-27 09:53:15.391627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:14.431 spare 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.431 [2024-11-27 09:53:15.491538] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:14.431 [2024-11-27 09:53:15.491661] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:14.431 [2024-11-27 09:53:15.492090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:16:14.431 [2024-11-27 09:53:15.492351] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:14.431 [2024-11-27 09:53:15.492401] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:14.431 [2024-11-27 09:53:15.492701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.431 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.432 09:53:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.432 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.432 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.432 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.432 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.432 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:14.432 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.432 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.432 "name": "raid_bdev1", 00:16:14.432 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:14.432 "strip_size_kb": 0, 00:16:14.432 "state": "online", 00:16:14.432 "raid_level": "raid1", 00:16:14.432 "superblock": true, 00:16:14.432 "num_base_bdevs": 4, 00:16:14.432 "num_base_bdevs_discovered": 3, 00:16:14.432 "num_base_bdevs_operational": 3, 00:16:14.432 "base_bdevs_list": [ 00:16:14.432 { 00:16:14.432 "name": "spare", 00:16:14.432 "uuid": "567cb595-a362-528a-ab67-5c2f7e42d667", 00:16:14.432 "is_configured": true, 00:16:14.432 "data_offset": 2048, 00:16:14.432 "data_size": 63488 00:16:14.432 }, 00:16:14.432 { 00:16:14.432 "name": null, 00:16:14.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.432 "is_configured": false, 00:16:14.432 "data_offset": 2048, 00:16:14.432 "data_size": 63488 00:16:14.432 }, 00:16:14.432 { 00:16:14.432 "name": "BaseBdev3", 00:16:14.432 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:14.432 "is_configured": true, 00:16:14.432 "data_offset": 2048, 00:16:14.432 "data_size": 63488 00:16:14.432 }, 00:16:14.432 { 00:16:14.432 "name": "BaseBdev4", 00:16:14.432 "uuid": 
"5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:14.432 "is_configured": true, 00:16:14.432 "data_offset": 2048, 00:16:14.432 "data_size": 63488 00:16:14.432 } 00:16:14.432 ] 00:16:14.432 }' 00:16:14.432 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.432 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.000 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:15.000 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.000 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:15.000 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:15.000 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.000 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.000 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.000 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.000 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.000 09:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.000 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.000 "name": "raid_bdev1", 00:16:15.000 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:15.000 "strip_size_kb": 0, 00:16:15.000 "state": "online", 00:16:15.000 "raid_level": "raid1", 00:16:15.000 "superblock": true, 00:16:15.000 "num_base_bdevs": 4, 00:16:15.000 "num_base_bdevs_discovered": 3, 00:16:15.000 "num_base_bdevs_operational": 3, 00:16:15.000 
"base_bdevs_list": [ 00:16:15.000 { 00:16:15.000 "name": "spare", 00:16:15.000 "uuid": "567cb595-a362-528a-ab67-5c2f7e42d667", 00:16:15.000 "is_configured": true, 00:16:15.000 "data_offset": 2048, 00:16:15.000 "data_size": 63488 00:16:15.000 }, 00:16:15.000 { 00:16:15.000 "name": null, 00:16:15.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.000 "is_configured": false, 00:16:15.000 "data_offset": 2048, 00:16:15.000 "data_size": 63488 00:16:15.000 }, 00:16:15.000 { 00:16:15.000 "name": "BaseBdev3", 00:16:15.000 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:15.000 "is_configured": true, 00:16:15.000 "data_offset": 2048, 00:16:15.000 "data_size": 63488 00:16:15.000 }, 00:16:15.000 { 00:16:15.000 "name": "BaseBdev4", 00:16:15.000 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:15.000 "is_configured": true, 00:16:15.000 "data_offset": 2048, 00:16:15.000 "data_size": 63488 00:16:15.000 } 00:16:15.000 ] 00:16:15.000 }' 00:16:15.000 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.000 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.000 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.000 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:15.000 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.000 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.000 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.000 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:15.000 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.260 09:53:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.260 [2024-11-27 09:53:16.151670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.260 "name": "raid_bdev1", 00:16:15.260 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:15.260 "strip_size_kb": 0, 00:16:15.260 "state": "online", 00:16:15.260 "raid_level": "raid1", 00:16:15.260 "superblock": true, 00:16:15.260 "num_base_bdevs": 4, 00:16:15.260 "num_base_bdevs_discovered": 2, 00:16:15.260 "num_base_bdevs_operational": 2, 00:16:15.260 "base_bdevs_list": [ 00:16:15.260 { 00:16:15.260 "name": null, 00:16:15.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.260 "is_configured": false, 00:16:15.260 "data_offset": 0, 00:16:15.260 "data_size": 63488 00:16:15.260 }, 00:16:15.260 { 00:16:15.260 "name": null, 00:16:15.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.260 "is_configured": false, 00:16:15.260 "data_offset": 2048, 00:16:15.260 "data_size": 63488 00:16:15.260 }, 00:16:15.260 { 00:16:15.260 "name": "BaseBdev3", 00:16:15.260 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:15.260 "is_configured": true, 00:16:15.260 "data_offset": 2048, 00:16:15.260 "data_size": 63488 00:16:15.260 }, 00:16:15.260 { 00:16:15.260 "name": "BaseBdev4", 00:16:15.260 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:15.260 "is_configured": true, 00:16:15.260 "data_offset": 2048, 00:16:15.260 "data_size": 63488 00:16:15.260 } 00:16:15.260 ] 00:16:15.260 }' 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.260 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.519 09:53:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:15.519 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.519 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:15.519 [2024-11-27 09:53:16.595042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:15.519 [2024-11-27 09:53:16.595306] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:15.519 [2024-11-27 09:53:16.595327] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:15.519 [2024-11-27 09:53:16.595373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:15.519 [2024-11-27 09:53:16.610319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:16:15.519 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.519 09:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:15.519 [2024-11-27 09:53:16.612582] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.899 "name": "raid_bdev1", 00:16:16.899 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:16.899 "strip_size_kb": 0, 00:16:16.899 "state": "online", 00:16:16.899 "raid_level": "raid1", 00:16:16.899 "superblock": true, 00:16:16.899 "num_base_bdevs": 4, 00:16:16.899 "num_base_bdevs_discovered": 3, 00:16:16.899 "num_base_bdevs_operational": 3, 00:16:16.899 "process": { 00:16:16.899 "type": "rebuild", 00:16:16.899 "target": "spare", 00:16:16.899 "progress": { 00:16:16.899 "blocks": 20480, 00:16:16.899 "percent": 32 00:16:16.899 } 00:16:16.899 }, 00:16:16.899 "base_bdevs_list": [ 00:16:16.899 { 00:16:16.899 "name": "spare", 00:16:16.899 "uuid": "567cb595-a362-528a-ab67-5c2f7e42d667", 00:16:16.899 "is_configured": true, 00:16:16.899 "data_offset": 2048, 00:16:16.899 "data_size": 63488 00:16:16.899 }, 00:16:16.899 { 00:16:16.899 "name": null, 00:16:16.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.899 "is_configured": false, 00:16:16.899 "data_offset": 2048, 00:16:16.899 "data_size": 63488 00:16:16.899 }, 00:16:16.899 { 00:16:16.899 "name": "BaseBdev3", 00:16:16.899 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:16.899 "is_configured": true, 00:16:16.899 "data_offset": 2048, 00:16:16.899 "data_size": 63488 00:16:16.899 }, 00:16:16.899 { 00:16:16.899 "name": "BaseBdev4", 00:16:16.899 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:16.899 
"is_configured": true, 00:16:16.899 "data_offset": 2048, 00:16:16.899 "data_size": 63488 00:16:16.899 } 00:16:16.899 ] 00:16:16.899 }' 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.899 [2024-11-27 09:53:17.780720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:16.899 [2024-11-27 09:53:17.822324] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:16.899 [2024-11-27 09:53:17.822409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.899 [2024-11-27 09:53:17.822429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:16.899 [2024-11-27 09:53:17.822442] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.899 "name": "raid_bdev1", 00:16:16.899 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:16.899 "strip_size_kb": 0, 00:16:16.899 "state": "online", 00:16:16.899 "raid_level": "raid1", 00:16:16.899 "superblock": true, 00:16:16.899 "num_base_bdevs": 4, 00:16:16.899 "num_base_bdevs_discovered": 2, 00:16:16.899 "num_base_bdevs_operational": 2, 00:16:16.899 "base_bdevs_list": [ 00:16:16.899 { 00:16:16.899 "name": null, 00:16:16.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.899 
"is_configured": false, 00:16:16.899 "data_offset": 0, 00:16:16.899 "data_size": 63488 00:16:16.899 }, 00:16:16.899 { 00:16:16.899 "name": null, 00:16:16.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.899 "is_configured": false, 00:16:16.899 "data_offset": 2048, 00:16:16.899 "data_size": 63488 00:16:16.899 }, 00:16:16.899 { 00:16:16.899 "name": "BaseBdev3", 00:16:16.899 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:16.899 "is_configured": true, 00:16:16.899 "data_offset": 2048, 00:16:16.899 "data_size": 63488 00:16:16.899 }, 00:16:16.899 { 00:16:16.899 "name": "BaseBdev4", 00:16:16.899 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:16.899 "is_configured": true, 00:16:16.899 "data_offset": 2048, 00:16:16.899 "data_size": 63488 00:16:16.899 } 00:16:16.899 ] 00:16:16.899 }' 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.899 09:53:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.468 09:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:17.468 09:53:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.468 09:53:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.468 [2024-11-27 09:53:18.325374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:17.468 [2024-11-27 09:53:18.325529] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.468 [2024-11-27 09:53:18.325601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:17.468 [2024-11-27 09:53:18.325643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.468 [2024-11-27 09:53:18.326339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.468 [2024-11-27 
09:53:18.326423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:17.468 [2024-11-27 09:53:18.326591] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:17.468 [2024-11-27 09:53:18.326643] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:17.468 [2024-11-27 09:53:18.326700] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:17.468 [2024-11-27 09:53:18.326765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:17.468 [2024-11-27 09:53:18.342913] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:16:17.468 spare 00:16:17.468 09:53:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.469 09:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:17.469 [2024-11-27 09:53:18.345289] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:18.404 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.404 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.404 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.404 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.404 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.404 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.404 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.404 
09:53:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.404 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.404 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.404 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.404 "name": "raid_bdev1", 00:16:18.404 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:18.404 "strip_size_kb": 0, 00:16:18.404 "state": "online", 00:16:18.404 "raid_level": "raid1", 00:16:18.404 "superblock": true, 00:16:18.404 "num_base_bdevs": 4, 00:16:18.404 "num_base_bdevs_discovered": 3, 00:16:18.404 "num_base_bdevs_operational": 3, 00:16:18.404 "process": { 00:16:18.404 "type": "rebuild", 00:16:18.404 "target": "spare", 00:16:18.404 "progress": { 00:16:18.404 "blocks": 20480, 00:16:18.404 "percent": 32 00:16:18.404 } 00:16:18.404 }, 00:16:18.404 "base_bdevs_list": [ 00:16:18.404 { 00:16:18.404 "name": "spare", 00:16:18.404 "uuid": "567cb595-a362-528a-ab67-5c2f7e42d667", 00:16:18.404 "is_configured": true, 00:16:18.404 "data_offset": 2048, 00:16:18.404 "data_size": 63488 00:16:18.404 }, 00:16:18.404 { 00:16:18.404 "name": null, 00:16:18.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.404 "is_configured": false, 00:16:18.404 "data_offset": 2048, 00:16:18.404 "data_size": 63488 00:16:18.404 }, 00:16:18.404 { 00:16:18.404 "name": "BaseBdev3", 00:16:18.404 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:18.404 "is_configured": true, 00:16:18.404 "data_offset": 2048, 00:16:18.404 "data_size": 63488 00:16:18.404 }, 00:16:18.404 { 00:16:18.404 "name": "BaseBdev4", 00:16:18.404 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:18.404 "is_configured": true, 00:16:18.404 "data_offset": 2048, 00:16:18.404 "data_size": 63488 00:16:18.404 } 00:16:18.404 ] 00:16:18.404 }' 00:16:18.404 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.404 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.404 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.404 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.404 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:18.404 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.404 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.404 [2024-11-27 09:53:19.509402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:18.663 [2024-11-27 09:53:19.556273] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:18.663 [2024-11-27 09:53:19.556526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.663 [2024-11-27 09:53:19.556557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:18.663 [2024-11-27 09:53:19.556568] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:18.663 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.663 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:18.663 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.663 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.663 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.663 09:53:19 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.663 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:18.663 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.663 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.663 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.663 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.663 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.663 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.663 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.663 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.663 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.663 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.663 "name": "raid_bdev1", 00:16:18.663 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:18.663 "strip_size_kb": 0, 00:16:18.663 "state": "online", 00:16:18.663 "raid_level": "raid1", 00:16:18.663 "superblock": true, 00:16:18.663 "num_base_bdevs": 4, 00:16:18.663 "num_base_bdevs_discovered": 2, 00:16:18.664 "num_base_bdevs_operational": 2, 00:16:18.664 "base_bdevs_list": [ 00:16:18.664 { 00:16:18.664 "name": null, 00:16:18.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.664 "is_configured": false, 00:16:18.664 "data_offset": 0, 00:16:18.664 "data_size": 63488 00:16:18.664 }, 00:16:18.664 { 00:16:18.664 "name": null, 00:16:18.664 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:18.664 "is_configured": false, 00:16:18.664 "data_offset": 2048, 00:16:18.664 "data_size": 63488 00:16:18.664 }, 00:16:18.664 { 00:16:18.664 "name": "BaseBdev3", 00:16:18.664 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:18.664 "is_configured": true, 00:16:18.664 "data_offset": 2048, 00:16:18.664 "data_size": 63488 00:16:18.664 }, 00:16:18.664 { 00:16:18.664 "name": "BaseBdev4", 00:16:18.664 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:18.664 "is_configured": true, 00:16:18.664 "data_offset": 2048, 00:16:18.664 "data_size": 63488 00:16:18.664 } 00:16:18.664 ] 00:16:18.664 }' 00:16:18.664 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.664 09:53:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.922 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:18.922 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.922 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.922 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.922 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.922 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.922 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.922 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.922 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.922 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.180 
09:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.180 "name": "raid_bdev1", 00:16:19.180 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:19.180 "strip_size_kb": 0, 00:16:19.180 "state": "online", 00:16:19.180 "raid_level": "raid1", 00:16:19.180 "superblock": true, 00:16:19.180 "num_base_bdevs": 4, 00:16:19.180 "num_base_bdevs_discovered": 2, 00:16:19.180 "num_base_bdevs_operational": 2, 00:16:19.180 "base_bdevs_list": [ 00:16:19.180 { 00:16:19.180 "name": null, 00:16:19.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.180 "is_configured": false, 00:16:19.180 "data_offset": 0, 00:16:19.180 "data_size": 63488 00:16:19.180 }, 00:16:19.180 { 00:16:19.180 "name": null, 00:16:19.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.180 "is_configured": false, 00:16:19.180 "data_offset": 2048, 00:16:19.180 "data_size": 63488 00:16:19.180 }, 00:16:19.181 { 00:16:19.181 "name": "BaseBdev3", 00:16:19.181 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:19.181 "is_configured": true, 00:16:19.181 "data_offset": 2048, 00:16:19.181 "data_size": 63488 00:16:19.181 }, 00:16:19.181 { 00:16:19.181 "name": "BaseBdev4", 00:16:19.181 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:19.181 "is_configured": true, 00:16:19.181 "data_offset": 2048, 00:16:19.181 "data_size": 63488 00:16:19.181 } 00:16:19.181 ] 00:16:19.181 }' 00:16:19.181 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.181 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:19.181 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.181 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:19.181 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 
00:16:19.181 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.181 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.181 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.181 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:19.181 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.181 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.181 [2024-11-27 09:53:20.164632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:19.181 [2024-11-27 09:53:20.164773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.181 [2024-11-27 09:53:20.164810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:16:19.181 [2024-11-27 09:53:20.164823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.181 [2024-11-27 09:53:20.165482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.181 [2024-11-27 09:53:20.165505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:19.181 [2024-11-27 09:53:20.165624] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:19.181 [2024-11-27 09:53:20.165643] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:19.181 [2024-11-27 09:53:20.165660] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:19.181 [2024-11-27 09:53:20.165674] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: 
Invalid argument 00:16:19.181 BaseBdev1 00:16:19.181 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.181 09:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:20.119 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:20.119 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.119 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.119 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.119 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.119 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:20.119 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.119 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.119 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.119 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.119 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.119 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.119 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.119 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.119 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.119 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.119 "name": "raid_bdev1", 00:16:20.119 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:20.119 "strip_size_kb": 0, 00:16:20.119 "state": "online", 00:16:20.119 "raid_level": "raid1", 00:16:20.119 "superblock": true, 00:16:20.119 "num_base_bdevs": 4, 00:16:20.119 "num_base_bdevs_discovered": 2, 00:16:20.119 "num_base_bdevs_operational": 2, 00:16:20.119 "base_bdevs_list": [ 00:16:20.119 { 00:16:20.119 "name": null, 00:16:20.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.119 "is_configured": false, 00:16:20.119 "data_offset": 0, 00:16:20.119 "data_size": 63488 00:16:20.119 }, 00:16:20.119 { 00:16:20.119 "name": null, 00:16:20.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.119 "is_configured": false, 00:16:20.119 "data_offset": 2048, 00:16:20.119 "data_size": 63488 00:16:20.119 }, 00:16:20.119 { 00:16:20.119 "name": "BaseBdev3", 00:16:20.119 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:20.119 "is_configured": true, 00:16:20.119 "data_offset": 2048, 00:16:20.119 "data_size": 63488 00:16:20.119 }, 00:16:20.119 { 00:16:20.119 "name": "BaseBdev4", 00:16:20.119 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:20.119 "is_configured": true, 00:16:20.119 "data_offset": 2048, 00:16:20.119 "data_size": 63488 00:16:20.119 } 00:16:20.119 ] 00:16:20.119 }' 00:16:20.119 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.119 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.687 "name": "raid_bdev1", 00:16:20.687 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:20.687 "strip_size_kb": 0, 00:16:20.687 "state": "online", 00:16:20.687 "raid_level": "raid1", 00:16:20.687 "superblock": true, 00:16:20.687 "num_base_bdevs": 4, 00:16:20.687 "num_base_bdevs_discovered": 2, 00:16:20.687 "num_base_bdevs_operational": 2, 00:16:20.687 "base_bdevs_list": [ 00:16:20.687 { 00:16:20.687 "name": null, 00:16:20.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.687 "is_configured": false, 00:16:20.687 "data_offset": 0, 00:16:20.687 "data_size": 63488 00:16:20.687 }, 00:16:20.687 { 00:16:20.687 "name": null, 00:16:20.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.687 "is_configured": false, 00:16:20.687 "data_offset": 2048, 00:16:20.687 "data_size": 63488 00:16:20.687 }, 00:16:20.687 { 00:16:20.687 "name": "BaseBdev3", 00:16:20.687 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:20.687 "is_configured": true, 00:16:20.687 "data_offset": 2048, 00:16:20.687 "data_size": 63488 00:16:20.687 }, 00:16:20.687 { 00:16:20.687 "name": "BaseBdev4", 00:16:20.687 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 
00:16:20.687 "is_configured": true, 00:16:20.687 "data_offset": 2048, 00:16:20.687 "data_size": 63488 00:16:20.687 } 00:16:20.687 ] 00:16:20.687 }' 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.687 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.687 [2024-11-27 09:53:21.722323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:20.688 [2024-11-27 
09:53:21.722626] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:20.688 [2024-11-27 09:53:21.722697] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:20.688 request: 00:16:20.688 { 00:16:20.688 "base_bdev": "BaseBdev1", 00:16:20.688 "raid_bdev": "raid_bdev1", 00:16:20.688 "method": "bdev_raid_add_base_bdev", 00:16:20.688 "req_id": 1 00:16:20.688 } 00:16:20.688 Got JSON-RPC error response 00:16:20.688 response: 00:16:20.688 { 00:16:20.688 "code": -22, 00:16:20.688 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:20.688 } 00:16:20.688 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:20.688 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:16:20.688 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:20.688 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:20.688 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:20.688 09:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:21.626 09:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:21.626 09:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.626 09:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.626 09:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:21.626 09:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:21.626 09:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:16:21.626 09:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.626 09:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.626 09:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.626 09:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.626 09:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.626 09:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.626 09:53:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.626 09:53:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.885 09:53:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.885 09:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.885 "name": "raid_bdev1", 00:16:21.885 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:21.885 "strip_size_kb": 0, 00:16:21.885 "state": "online", 00:16:21.885 "raid_level": "raid1", 00:16:21.885 "superblock": true, 00:16:21.885 "num_base_bdevs": 4, 00:16:21.885 "num_base_bdevs_discovered": 2, 00:16:21.885 "num_base_bdevs_operational": 2, 00:16:21.885 "base_bdevs_list": [ 00:16:21.885 { 00:16:21.885 "name": null, 00:16:21.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.885 "is_configured": false, 00:16:21.885 "data_offset": 0, 00:16:21.885 "data_size": 63488 00:16:21.885 }, 00:16:21.885 { 00:16:21.885 "name": null, 00:16:21.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.885 "is_configured": false, 00:16:21.885 "data_offset": 2048, 00:16:21.885 "data_size": 63488 00:16:21.885 }, 00:16:21.885 { 00:16:21.885 "name": 
"BaseBdev3", 00:16:21.885 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:21.885 "is_configured": true, 00:16:21.885 "data_offset": 2048, 00:16:21.885 "data_size": 63488 00:16:21.885 }, 00:16:21.885 { 00:16:21.885 "name": "BaseBdev4", 00:16:21.885 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:21.885 "is_configured": true, 00:16:21.886 "data_offset": 2048, 00:16:21.886 "data_size": 63488 00:16:21.886 } 00:16:21.886 ] 00:16:21.886 }' 00:16:21.886 09:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.886 09:53:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.145 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:22.145 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.145 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:22.145 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:22.145 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.145 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.145 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.145 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.145 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.145 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.145 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.145 "name": "raid_bdev1", 00:16:22.145 "uuid": "07a5498e-bf59-406b-9efe-f54045a6aa4b", 00:16:22.145 
"strip_size_kb": 0, 00:16:22.145 "state": "online", 00:16:22.145 "raid_level": "raid1", 00:16:22.145 "superblock": true, 00:16:22.145 "num_base_bdevs": 4, 00:16:22.145 "num_base_bdevs_discovered": 2, 00:16:22.145 "num_base_bdevs_operational": 2, 00:16:22.145 "base_bdevs_list": [ 00:16:22.145 { 00:16:22.145 "name": null, 00:16:22.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.145 "is_configured": false, 00:16:22.145 "data_offset": 0, 00:16:22.145 "data_size": 63488 00:16:22.145 }, 00:16:22.145 { 00:16:22.145 "name": null, 00:16:22.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.145 "is_configured": false, 00:16:22.145 "data_offset": 2048, 00:16:22.145 "data_size": 63488 00:16:22.145 }, 00:16:22.145 { 00:16:22.145 "name": "BaseBdev3", 00:16:22.145 "uuid": "f8e0db1c-344f-570c-a9be-7cd840d0e160", 00:16:22.145 "is_configured": true, 00:16:22.145 "data_offset": 2048, 00:16:22.145 "data_size": 63488 00:16:22.145 }, 00:16:22.145 { 00:16:22.145 "name": "BaseBdev4", 00:16:22.145 "uuid": "5dc81b38-6857-5eaf-a5c9-a77848fe3366", 00:16:22.145 "is_configured": true, 00:16:22.145 "data_offset": 2048, 00:16:22.145 "data_size": 63488 00:16:22.145 } 00:16:22.145 ] 00:16:22.145 }' 00:16:22.145 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.405 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:22.405 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.405 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:22.405 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79459 00:16:22.405 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79459 ']' 00:16:22.405 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79459 00:16:22.405 
09:53:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:16:22.405 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.405 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79459 00:16:22.405 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.405 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.405 killing process with pid 79459 00:16:22.405 Received shutdown signal, test time was about 18.971300 seconds 00:16:22.405 00:16:22.405 Latency(us) 00:16:22.405 [2024-11-27T09:53:23.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.405 [2024-11-27T09:53:23.538Z] =================================================================================================================== 00:16:22.405 [2024-11-27T09:53:23.538Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:22.405 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79459' 00:16:22.405 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79459 00:16:22.405 [2024-11-27 09:53:23.395850] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:22.405 09:53:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79459 00:16:22.405 [2024-11-27 09:53:23.396067] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:22.405 [2024-11-27 09:53:23.396158] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:22.405 [2024-11-27 09:53:23.396181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:22.974 [2024-11-27 09:53:23.840222] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:24.384 09:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:24.384 00:16:24.384 real 0m22.646s 00:16:24.384 user 0m29.050s 00:16:24.384 sys 0m2.914s 00:16:24.384 09:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.384 09:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:24.384 ************************************ 00:16:24.384 END TEST raid_rebuild_test_sb_io 00:16:24.384 ************************************ 00:16:24.384 09:53:25 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:24.384 09:53:25 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:16:24.384 09:53:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:24.384 09:53:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:24.384 09:53:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:24.384 ************************************ 00:16:24.384 START TEST raid5f_state_function_test 00:16:24.384 ************************************ 00:16:24.384 09:53:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:16:24.384 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80200 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80200' 00:16:24.385 Process raid pid: 80200 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80200 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80200 ']' 00:16:24.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.385 09:53:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.385 [2024-11-27 09:53:25.314953] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:16:24.385 [2024-11-27 09:53:25.315130] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.385 [2024-11-27 09:53:25.497142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.644 [2024-11-27 09:53:25.642462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.903 [2024-11-27 09:53:25.885449] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.903 [2024-11-27 09:53:25.885658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.163 [2024-11-27 09:53:26.153167] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:25.163 [2024-11-27 09:53:26.153313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:25.163 [2024-11-27 09:53:26.153371] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.163 [2024-11-27 09:53:26.153402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:25.163 [2024-11-27 09:53:26.153427] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:25.163 [2024-11-27 09:53:26.153456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:25.163 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.163 "name": "Existed_Raid", 00:16:25.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.163 "strip_size_kb": 64, 00:16:25.163 "state": "configuring", 00:16:25.163 "raid_level": "raid5f", 00:16:25.163 "superblock": false, 00:16:25.163 "num_base_bdevs": 3, 00:16:25.163 "num_base_bdevs_discovered": 0, 00:16:25.163 "num_base_bdevs_operational": 3, 00:16:25.163 "base_bdevs_list": [ 00:16:25.163 { 00:16:25.163 "name": "BaseBdev1", 00:16:25.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.163 "is_configured": false, 00:16:25.163 "data_offset": 0, 00:16:25.163 "data_size": 0 00:16:25.163 }, 00:16:25.163 { 00:16:25.163 "name": "BaseBdev2", 00:16:25.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.163 "is_configured": false, 00:16:25.163 "data_offset": 0, 00:16:25.163 "data_size": 0 00:16:25.163 }, 00:16:25.163 { 00:16:25.163 "name": "BaseBdev3", 00:16:25.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.163 "is_configured": false, 00:16:25.163 "data_offset": 0, 00:16:25.164 "data_size": 0 00:16:25.164 } 00:16:25.164 ] 00:16:25.164 }' 00:16:25.164 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.164 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.734 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:25.734 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.734 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.734 [2024-11-27 09:53:26.612296] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:25.734 [2024-11-27 09:53:26.612415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:16:25.734 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.735 [2024-11-27 09:53:26.624252] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:25.735 [2024-11-27 09:53:26.624353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:25.735 [2024-11-27 09:53:26.624387] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:25.735 [2024-11-27 09:53:26.624415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:25.735 [2024-11-27 09:53:26.624437] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:25.735 [2024-11-27 09:53:26.624487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.735 [2024-11-27 09:53:26.678396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.735 BaseBdev1 00:16:25.735 09:53:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.735 [ 00:16:25.735 { 00:16:25.735 "name": "BaseBdev1", 00:16:25.735 "aliases": [ 00:16:25.735 "ca01687f-2820-43c3-bbaf-cec34404a3fb" 00:16:25.735 ], 00:16:25.735 "product_name": "Malloc disk", 00:16:25.735 "block_size": 512, 00:16:25.735 "num_blocks": 65536, 00:16:25.735 "uuid": "ca01687f-2820-43c3-bbaf-cec34404a3fb", 00:16:25.735 "assigned_rate_limits": { 00:16:25.735 "rw_ios_per_sec": 0, 00:16:25.735 
"rw_mbytes_per_sec": 0, 00:16:25.735 "r_mbytes_per_sec": 0, 00:16:25.735 "w_mbytes_per_sec": 0 00:16:25.735 }, 00:16:25.735 "claimed": true, 00:16:25.735 "claim_type": "exclusive_write", 00:16:25.735 "zoned": false, 00:16:25.735 "supported_io_types": { 00:16:25.735 "read": true, 00:16:25.735 "write": true, 00:16:25.735 "unmap": true, 00:16:25.735 "flush": true, 00:16:25.735 "reset": true, 00:16:25.735 "nvme_admin": false, 00:16:25.735 "nvme_io": false, 00:16:25.735 "nvme_io_md": false, 00:16:25.735 "write_zeroes": true, 00:16:25.735 "zcopy": true, 00:16:25.735 "get_zone_info": false, 00:16:25.735 "zone_management": false, 00:16:25.735 "zone_append": false, 00:16:25.735 "compare": false, 00:16:25.735 "compare_and_write": false, 00:16:25.735 "abort": true, 00:16:25.735 "seek_hole": false, 00:16:25.735 "seek_data": false, 00:16:25.735 "copy": true, 00:16:25.735 "nvme_iov_md": false 00:16:25.735 }, 00:16:25.735 "memory_domains": [ 00:16:25.735 { 00:16:25.735 "dma_device_id": "system", 00:16:25.735 "dma_device_type": 1 00:16:25.735 }, 00:16:25.735 { 00:16:25.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.735 "dma_device_type": 2 00:16:25.735 } 00:16:25.735 ], 00:16:25.735 "driver_specific": {} 00:16:25.735 } 00:16:25.735 ] 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.735 09:53:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.735 "name": "Existed_Raid", 00:16:25.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.735 "strip_size_kb": 64, 00:16:25.735 "state": "configuring", 00:16:25.735 "raid_level": "raid5f", 00:16:25.735 "superblock": false, 00:16:25.735 "num_base_bdevs": 3, 00:16:25.735 "num_base_bdevs_discovered": 1, 00:16:25.735 "num_base_bdevs_operational": 3, 00:16:25.735 "base_bdevs_list": [ 00:16:25.735 { 00:16:25.735 "name": "BaseBdev1", 00:16:25.735 "uuid": "ca01687f-2820-43c3-bbaf-cec34404a3fb", 00:16:25.735 "is_configured": true, 00:16:25.735 "data_offset": 0, 00:16:25.735 "data_size": 65536 00:16:25.735 }, 00:16:25.735 { 00:16:25.735 "name": 
"BaseBdev2", 00:16:25.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.735 "is_configured": false, 00:16:25.735 "data_offset": 0, 00:16:25.735 "data_size": 0 00:16:25.735 }, 00:16:25.735 { 00:16:25.735 "name": "BaseBdev3", 00:16:25.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.735 "is_configured": false, 00:16:25.735 "data_offset": 0, 00:16:25.735 "data_size": 0 00:16:25.735 } 00:16:25.735 ] 00:16:25.735 }' 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.735 09:53:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.304 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:26.304 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.304 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.304 [2024-11-27 09:53:27.189618] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:26.304 [2024-11-27 09:53:27.189749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:26.304 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.304 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:26.304 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.304 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.304 [2024-11-27 09:53:27.201646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.304 [2024-11-27 09:53:27.203932] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:16:26.304 [2024-11-27 09:53:27.204034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:26.304 [2024-11-27 09:53:27.204077] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:26.304 [2024-11-27 09:53:27.204106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:26.304 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.304 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:26.304 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:26.305 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:26.305 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.305 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.305 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.305 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.305 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.305 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.305 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.305 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.305 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.305 09:53:27 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.305 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.305 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.305 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.305 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.305 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.305 "name": "Existed_Raid", 00:16:26.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.305 "strip_size_kb": 64, 00:16:26.305 "state": "configuring", 00:16:26.305 "raid_level": "raid5f", 00:16:26.305 "superblock": false, 00:16:26.305 "num_base_bdevs": 3, 00:16:26.305 "num_base_bdevs_discovered": 1, 00:16:26.305 "num_base_bdevs_operational": 3, 00:16:26.305 "base_bdevs_list": [ 00:16:26.305 { 00:16:26.305 "name": "BaseBdev1", 00:16:26.305 "uuid": "ca01687f-2820-43c3-bbaf-cec34404a3fb", 00:16:26.305 "is_configured": true, 00:16:26.305 "data_offset": 0, 00:16:26.305 "data_size": 65536 00:16:26.305 }, 00:16:26.305 { 00:16:26.305 "name": "BaseBdev2", 00:16:26.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.305 "is_configured": false, 00:16:26.305 "data_offset": 0, 00:16:26.305 "data_size": 0 00:16:26.305 }, 00:16:26.305 { 00:16:26.305 "name": "BaseBdev3", 00:16:26.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.305 "is_configured": false, 00:16:26.305 "data_offset": 0, 00:16:26.305 "data_size": 0 00:16:26.305 } 00:16:26.305 ] 00:16:26.305 }' 00:16:26.305 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.305 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.565 09:53:27 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:26.565 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.565 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.565 [2024-11-27 09:53:27.688516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:26.565 BaseBdev2 00:16:26.565 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.565 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:26.565 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:26.565 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:26.565 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:26.565 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:26.565 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:26.565 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:26.565 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.565 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:26.825 [ 00:16:26.825 { 00:16:26.825 "name": "BaseBdev2", 00:16:26.825 "aliases": [ 00:16:26.825 "8fe51387-a494-488f-84f6-6770079ccb2d" 00:16:26.825 ], 00:16:26.825 "product_name": "Malloc disk", 00:16:26.825 "block_size": 512, 00:16:26.825 "num_blocks": 65536, 00:16:26.825 "uuid": "8fe51387-a494-488f-84f6-6770079ccb2d", 00:16:26.825 "assigned_rate_limits": { 00:16:26.825 "rw_ios_per_sec": 0, 00:16:26.825 "rw_mbytes_per_sec": 0, 00:16:26.825 "r_mbytes_per_sec": 0, 00:16:26.825 "w_mbytes_per_sec": 0 00:16:26.825 }, 00:16:26.825 "claimed": true, 00:16:26.825 "claim_type": "exclusive_write", 00:16:26.825 "zoned": false, 00:16:26.825 "supported_io_types": { 00:16:26.825 "read": true, 00:16:26.825 "write": true, 00:16:26.825 "unmap": true, 00:16:26.825 "flush": true, 00:16:26.825 "reset": true, 00:16:26.825 "nvme_admin": false, 00:16:26.825 "nvme_io": false, 00:16:26.825 "nvme_io_md": false, 00:16:26.825 "write_zeroes": true, 00:16:26.825 "zcopy": true, 00:16:26.825 "get_zone_info": false, 00:16:26.825 "zone_management": false, 00:16:26.825 "zone_append": false, 00:16:26.825 "compare": false, 00:16:26.825 "compare_and_write": false, 00:16:26.825 "abort": true, 00:16:26.825 "seek_hole": false, 00:16:26.825 "seek_data": false, 00:16:26.825 "copy": true, 00:16:26.825 "nvme_iov_md": false 00:16:26.825 }, 00:16:26.825 "memory_domains": [ 00:16:26.825 { 00:16:26.825 "dma_device_id": "system", 00:16:26.825 "dma_device_type": 1 00:16:26.825 }, 00:16:26.825 { 00:16:26.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:26.825 "dma_device_type": 2 00:16:26.825 } 00:16:26.825 ], 00:16:26.825 "driver_specific": {} 00:16:26.825 } 00:16:26.825 ] 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:16:26.825 "name": "Existed_Raid", 00:16:26.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.825 "strip_size_kb": 64, 00:16:26.825 "state": "configuring", 00:16:26.825 "raid_level": "raid5f", 00:16:26.825 "superblock": false, 00:16:26.825 "num_base_bdevs": 3, 00:16:26.825 "num_base_bdevs_discovered": 2, 00:16:26.825 "num_base_bdevs_operational": 3, 00:16:26.825 "base_bdevs_list": [ 00:16:26.825 { 00:16:26.825 "name": "BaseBdev1", 00:16:26.825 "uuid": "ca01687f-2820-43c3-bbaf-cec34404a3fb", 00:16:26.825 "is_configured": true, 00:16:26.825 "data_offset": 0, 00:16:26.825 "data_size": 65536 00:16:26.825 }, 00:16:26.825 { 00:16:26.825 "name": "BaseBdev2", 00:16:26.825 "uuid": "8fe51387-a494-488f-84f6-6770079ccb2d", 00:16:26.825 "is_configured": true, 00:16:26.825 "data_offset": 0, 00:16:26.825 "data_size": 65536 00:16:26.825 }, 00:16:26.825 { 00:16:26.825 "name": "BaseBdev3", 00:16:26.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.825 "is_configured": false, 00:16:26.825 "data_offset": 0, 00:16:26.825 "data_size": 0 00:16:26.825 } 00:16:26.825 ] 00:16:26.825 }' 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.825 09:53:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.084 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:27.084 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.084 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.084 [2024-11-27 09:53:28.197475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:27.084 [2024-11-27 09:53:28.197708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:27.084 [2024-11-27 09:53:28.197751] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:27.084 [2024-11-27 09:53:28.198134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:27.084 [2024-11-27 09:53:28.203943] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:27.084 [2024-11-27 09:53:28.204034] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:27.084 [2024-11-27 09:53:28.204473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.084 BaseBdev3 00:16:27.084 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.084 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:27.085 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:27.085 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:27.085 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:27.085 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:27.085 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:27.085 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:27.085 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.085 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.344 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.344 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:16:27.344 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.344 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.344 [ 00:16:27.344 { 00:16:27.344 "name": "BaseBdev3", 00:16:27.344 "aliases": [ 00:16:27.344 "69fa1f12-543b-47d4-93d9-b927a191fac6" 00:16:27.344 ], 00:16:27.344 "product_name": "Malloc disk", 00:16:27.344 "block_size": 512, 00:16:27.344 "num_blocks": 65536, 00:16:27.344 "uuid": "69fa1f12-543b-47d4-93d9-b927a191fac6", 00:16:27.344 "assigned_rate_limits": { 00:16:27.344 "rw_ios_per_sec": 0, 00:16:27.344 "rw_mbytes_per_sec": 0, 00:16:27.344 "r_mbytes_per_sec": 0, 00:16:27.344 "w_mbytes_per_sec": 0 00:16:27.344 }, 00:16:27.344 "claimed": true, 00:16:27.344 "claim_type": "exclusive_write", 00:16:27.344 "zoned": false, 00:16:27.344 "supported_io_types": { 00:16:27.344 "read": true, 00:16:27.344 "write": true, 00:16:27.344 "unmap": true, 00:16:27.344 "flush": true, 00:16:27.344 "reset": true, 00:16:27.344 "nvme_admin": false, 00:16:27.344 "nvme_io": false, 00:16:27.344 "nvme_io_md": false, 00:16:27.344 "write_zeroes": true, 00:16:27.344 "zcopy": true, 00:16:27.344 "get_zone_info": false, 00:16:27.344 "zone_management": false, 00:16:27.344 "zone_append": false, 00:16:27.344 "compare": false, 00:16:27.344 "compare_and_write": false, 00:16:27.344 "abort": true, 00:16:27.344 "seek_hole": false, 00:16:27.344 "seek_data": false, 00:16:27.344 "copy": true, 00:16:27.344 "nvme_iov_md": false 00:16:27.344 }, 00:16:27.344 "memory_domains": [ 00:16:27.344 { 00:16:27.344 "dma_device_id": "system", 00:16:27.344 "dma_device_type": 1 00:16:27.344 }, 00:16:27.344 { 00:16:27.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:27.344 "dma_device_type": 2 00:16:27.344 } 00:16:27.344 ], 00:16:27.344 "driver_specific": {} 00:16:27.344 } 00:16:27.344 ] 00:16:27.344 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:27.344 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:27.344 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:27.344 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:27.344 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:27.344 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:27.344 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.345 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.345 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.345 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.345 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.345 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.345 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.345 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.345 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.345 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:27.345 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.345 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.345 09:53:28 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.345 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.345 "name": "Existed_Raid", 00:16:27.345 "uuid": "be7d26b4-3a63-42e8-abe0-330dbefc0f42", 00:16:27.345 "strip_size_kb": 64, 00:16:27.345 "state": "online", 00:16:27.345 "raid_level": "raid5f", 00:16:27.345 "superblock": false, 00:16:27.345 "num_base_bdevs": 3, 00:16:27.345 "num_base_bdevs_discovered": 3, 00:16:27.345 "num_base_bdevs_operational": 3, 00:16:27.345 "base_bdevs_list": [ 00:16:27.345 { 00:16:27.345 "name": "BaseBdev1", 00:16:27.345 "uuid": "ca01687f-2820-43c3-bbaf-cec34404a3fb", 00:16:27.345 "is_configured": true, 00:16:27.345 "data_offset": 0, 00:16:27.345 "data_size": 65536 00:16:27.345 }, 00:16:27.345 { 00:16:27.345 "name": "BaseBdev2", 00:16:27.345 "uuid": "8fe51387-a494-488f-84f6-6770079ccb2d", 00:16:27.345 "is_configured": true, 00:16:27.345 "data_offset": 0, 00:16:27.345 "data_size": 65536 00:16:27.345 }, 00:16:27.345 { 00:16:27.345 "name": "BaseBdev3", 00:16:27.345 "uuid": "69fa1f12-543b-47d4-93d9-b927a191fac6", 00:16:27.345 "is_configured": true, 00:16:27.345 "data_offset": 0, 00:16:27.345 "data_size": 65536 00:16:27.345 } 00:16:27.345 ] 00:16:27.345 }' 00:16:27.345 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.345 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.604 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:27.604 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:27.604 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:27.604 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:27.604 09:53:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:27.604 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:27.604 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:27.604 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.604 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.604 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:27.604 [2024-11-27 09:53:28.695444] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.604 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:27.865 "name": "Existed_Raid", 00:16:27.865 "aliases": [ 00:16:27.865 "be7d26b4-3a63-42e8-abe0-330dbefc0f42" 00:16:27.865 ], 00:16:27.865 "product_name": "Raid Volume", 00:16:27.865 "block_size": 512, 00:16:27.865 "num_blocks": 131072, 00:16:27.865 "uuid": "be7d26b4-3a63-42e8-abe0-330dbefc0f42", 00:16:27.865 "assigned_rate_limits": { 00:16:27.865 "rw_ios_per_sec": 0, 00:16:27.865 "rw_mbytes_per_sec": 0, 00:16:27.865 "r_mbytes_per_sec": 0, 00:16:27.865 "w_mbytes_per_sec": 0 00:16:27.865 }, 00:16:27.865 "claimed": false, 00:16:27.865 "zoned": false, 00:16:27.865 "supported_io_types": { 00:16:27.865 "read": true, 00:16:27.865 "write": true, 00:16:27.865 "unmap": false, 00:16:27.865 "flush": false, 00:16:27.865 "reset": true, 00:16:27.865 "nvme_admin": false, 00:16:27.865 "nvme_io": false, 00:16:27.865 "nvme_io_md": false, 00:16:27.865 "write_zeroes": true, 00:16:27.865 "zcopy": false, 00:16:27.865 "get_zone_info": false, 00:16:27.865 "zone_management": false, 00:16:27.865 "zone_append": false, 
00:16:27.865 "compare": false, 00:16:27.865 "compare_and_write": false, 00:16:27.865 "abort": false, 00:16:27.865 "seek_hole": false, 00:16:27.865 "seek_data": false, 00:16:27.865 "copy": false, 00:16:27.865 "nvme_iov_md": false 00:16:27.865 }, 00:16:27.865 "driver_specific": { 00:16:27.865 "raid": { 00:16:27.865 "uuid": "be7d26b4-3a63-42e8-abe0-330dbefc0f42", 00:16:27.865 "strip_size_kb": 64, 00:16:27.865 "state": "online", 00:16:27.865 "raid_level": "raid5f", 00:16:27.865 "superblock": false, 00:16:27.865 "num_base_bdevs": 3, 00:16:27.865 "num_base_bdevs_discovered": 3, 00:16:27.865 "num_base_bdevs_operational": 3, 00:16:27.865 "base_bdevs_list": [ 00:16:27.865 { 00:16:27.865 "name": "BaseBdev1", 00:16:27.865 "uuid": "ca01687f-2820-43c3-bbaf-cec34404a3fb", 00:16:27.865 "is_configured": true, 00:16:27.865 "data_offset": 0, 00:16:27.865 "data_size": 65536 00:16:27.865 }, 00:16:27.865 { 00:16:27.865 "name": "BaseBdev2", 00:16:27.865 "uuid": "8fe51387-a494-488f-84f6-6770079ccb2d", 00:16:27.865 "is_configured": true, 00:16:27.865 "data_offset": 0, 00:16:27.865 "data_size": 65536 00:16:27.865 }, 00:16:27.865 { 00:16:27.865 "name": "BaseBdev3", 00:16:27.865 "uuid": "69fa1f12-543b-47d4-93d9-b927a191fac6", 00:16:27.865 "is_configured": true, 00:16:27.865 "data_offset": 0, 00:16:27.865 "data_size": 65536 00:16:27.865 } 00:16:27.865 ] 00:16:27.865 } 00:16:27.865 } 00:16:27.865 }' 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:27.865 BaseBdev2 00:16:27.865 BaseBdev3' 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.865 09:53:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.125 [2024-11-27 09:53:28.998808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:28.125 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.125 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:28.125 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:28.125 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:28.125 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:28.125 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:28.125 
09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:28.125 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:28.125 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.125 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.125 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.125 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:28.125 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.125 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.125 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.125 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.125 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.125 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.125 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.126 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.126 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.126 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.126 "name": "Existed_Raid", 00:16:28.126 "uuid": "be7d26b4-3a63-42e8-abe0-330dbefc0f42", 00:16:28.126 "strip_size_kb": 64, 00:16:28.126 "state": 
"online", 00:16:28.126 "raid_level": "raid5f", 00:16:28.126 "superblock": false, 00:16:28.126 "num_base_bdevs": 3, 00:16:28.126 "num_base_bdevs_discovered": 2, 00:16:28.126 "num_base_bdevs_operational": 2, 00:16:28.126 "base_bdevs_list": [ 00:16:28.126 { 00:16:28.126 "name": null, 00:16:28.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.126 "is_configured": false, 00:16:28.126 "data_offset": 0, 00:16:28.126 "data_size": 65536 00:16:28.126 }, 00:16:28.126 { 00:16:28.126 "name": "BaseBdev2", 00:16:28.126 "uuid": "8fe51387-a494-488f-84f6-6770079ccb2d", 00:16:28.126 "is_configured": true, 00:16:28.126 "data_offset": 0, 00:16:28.126 "data_size": 65536 00:16:28.126 }, 00:16:28.126 { 00:16:28.126 "name": "BaseBdev3", 00:16:28.126 "uuid": "69fa1f12-543b-47d4-93d9-b927a191fac6", 00:16:28.126 "is_configured": true, 00:16:28.126 "data_offset": 0, 00:16:28.126 "data_size": 65536 00:16:28.126 } 00:16:28.126 ] 00:16:28.126 }' 00:16:28.126 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.126 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.693 [2024-11-27 09:53:29.603200] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:28.693 [2024-11-27 09:53:29.603433] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:28.693 [2024-11-27 09:53:29.708057] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.693 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.693 [2024-11-27 09:53:29.756061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:28.693 [2024-11-27 09:53:29.756231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.952 BaseBdev2 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:28.952 [ 00:16:28.952 { 00:16:28.952 "name": "BaseBdev2", 00:16:28.952 "aliases": [ 00:16:28.952 "7d819377-dd6d-4a2c-ae08-c69e75afbdc3" 00:16:28.952 ], 00:16:28.952 "product_name": "Malloc disk", 00:16:28.952 "block_size": 512, 00:16:28.952 "num_blocks": 65536, 00:16:28.952 "uuid": "7d819377-dd6d-4a2c-ae08-c69e75afbdc3", 00:16:28.952 "assigned_rate_limits": { 00:16:28.952 "rw_ios_per_sec": 0, 00:16:28.952 "rw_mbytes_per_sec": 0, 00:16:28.952 "r_mbytes_per_sec": 0, 00:16:28.952 "w_mbytes_per_sec": 0 00:16:28.952 }, 00:16:28.952 "claimed": false, 00:16:28.952 "zoned": false, 00:16:28.952 "supported_io_types": { 00:16:28.952 "read": true, 00:16:28.952 "write": true, 00:16:28.952 "unmap": true, 00:16:28.952 "flush": true, 00:16:28.952 "reset": true, 00:16:28.952 "nvme_admin": false, 00:16:28.952 "nvme_io": false, 00:16:28.952 "nvme_io_md": false, 00:16:28.952 "write_zeroes": true, 00:16:28.952 "zcopy": true, 00:16:28.952 "get_zone_info": false, 00:16:28.952 "zone_management": false, 00:16:28.952 "zone_append": false, 00:16:28.952 "compare": false, 00:16:28.952 "compare_and_write": false, 00:16:28.952 "abort": true, 00:16:28.952 "seek_hole": false, 00:16:28.952 "seek_data": false, 00:16:28.952 "copy": true, 00:16:28.952 "nvme_iov_md": false 00:16:28.952 }, 00:16:28.952 "memory_domains": [ 00:16:28.952 { 00:16:28.952 "dma_device_id": "system", 00:16:28.952 "dma_device_type": 1 00:16:28.952 }, 00:16:28.952 { 00:16:28.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.952 "dma_device_type": 2 00:16:28.952 } 00:16:28.952 ], 00:16:28.952 "driver_specific": {} 00:16:28.952 } 00:16:28.952 ] 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:28.952 09:53:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:28.952 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.952 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.952 BaseBdev3 00:16:28.952 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.952 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:28.952 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:28.952 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:28.952 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:28.952 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:28.952 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:28.952 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:28.952 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.952 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.952 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.952 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:28.952 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.952 09:53:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.952 [ 00:16:28.952 { 00:16:28.952 "name": "BaseBdev3", 00:16:28.952 "aliases": [ 00:16:28.952 "89234d96-0b6b-46b8-9d99-426535c5bc31" 00:16:28.952 ], 00:16:28.952 "product_name": "Malloc disk", 00:16:28.952 "block_size": 512, 00:16:28.952 "num_blocks": 65536, 00:16:28.952 "uuid": "89234d96-0b6b-46b8-9d99-426535c5bc31", 00:16:28.952 "assigned_rate_limits": { 00:16:28.952 "rw_ios_per_sec": 0, 00:16:28.952 "rw_mbytes_per_sec": 0, 00:16:28.952 "r_mbytes_per_sec": 0, 00:16:28.952 "w_mbytes_per_sec": 0 00:16:28.952 }, 00:16:28.952 "claimed": false, 00:16:28.952 "zoned": false, 00:16:28.952 "supported_io_types": { 00:16:28.952 "read": true, 00:16:28.952 "write": true, 00:16:28.952 "unmap": true, 00:16:28.952 "flush": true, 00:16:28.952 "reset": true, 00:16:28.953 "nvme_admin": false, 00:16:28.953 "nvme_io": false, 00:16:28.953 "nvme_io_md": false, 00:16:28.953 "write_zeroes": true, 00:16:28.953 "zcopy": true, 00:16:28.953 "get_zone_info": false, 00:16:28.953 "zone_management": false, 00:16:28.953 "zone_append": false, 00:16:28.953 "compare": false, 00:16:28.953 "compare_and_write": false, 00:16:28.953 "abort": true, 00:16:28.953 "seek_hole": false, 00:16:28.953 "seek_data": false, 00:16:28.953 "copy": true, 00:16:28.953 "nvme_iov_md": false 00:16:28.953 }, 00:16:28.953 "memory_domains": [ 00:16:29.212 { 00:16:29.212 "dma_device_id": "system", 00:16:29.212 "dma_device_type": 1 00:16:29.212 }, 00:16:29.212 { 00:16:29.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.212 "dma_device_type": 2 00:16:29.212 } 00:16:29.212 ], 00:16:29.212 "driver_specific": {} 00:16:29.212 } 00:16:29.212 ] 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:29.212 09:53:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.212 [2024-11-27 09:53:30.093647] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.212 [2024-11-27 09:53:30.093781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.212 [2024-11-27 09:53:30.093843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:29.212 [2024-11-27 09:53:30.096179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.212 09:53:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.212 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.212 "name": "Existed_Raid", 00:16:29.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.212 "strip_size_kb": 64, 00:16:29.212 "state": "configuring", 00:16:29.212 "raid_level": "raid5f", 00:16:29.212 "superblock": false, 00:16:29.212 "num_base_bdevs": 3, 00:16:29.212 "num_base_bdevs_discovered": 2, 00:16:29.212 "num_base_bdevs_operational": 3, 00:16:29.212 "base_bdevs_list": [ 00:16:29.212 { 00:16:29.212 "name": "BaseBdev1", 00:16:29.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.212 "is_configured": false, 00:16:29.212 "data_offset": 0, 00:16:29.212 "data_size": 0 00:16:29.212 }, 00:16:29.212 { 00:16:29.212 "name": "BaseBdev2", 00:16:29.213 "uuid": "7d819377-dd6d-4a2c-ae08-c69e75afbdc3", 00:16:29.213 "is_configured": true, 00:16:29.213 "data_offset": 0, 00:16:29.213 "data_size": 65536 00:16:29.213 }, 00:16:29.213 { 00:16:29.213 "name": "BaseBdev3", 00:16:29.213 "uuid": "89234d96-0b6b-46b8-9d99-426535c5bc31", 00:16:29.213 "is_configured": true, 
00:16:29.213 "data_offset": 0, 00:16:29.213 "data_size": 65536 00:16:29.213 } 00:16:29.213 ] 00:16:29.213 }' 00:16:29.213 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.213 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.473 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:29.473 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.473 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.473 [2024-11-27 09:53:30.544902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:29.473 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.473 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:29.473 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:29.473 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.473 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.473 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.473 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.473 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.473 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.473 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.473 09:53:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.473 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.473 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:29.473 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.473 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.473 09:53:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.473 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.473 "name": "Existed_Raid", 00:16:29.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.473 "strip_size_kb": 64, 00:16:29.473 "state": "configuring", 00:16:29.473 "raid_level": "raid5f", 00:16:29.473 "superblock": false, 00:16:29.473 "num_base_bdevs": 3, 00:16:29.473 "num_base_bdevs_discovered": 1, 00:16:29.473 "num_base_bdevs_operational": 3, 00:16:29.473 "base_bdevs_list": [ 00:16:29.473 { 00:16:29.473 "name": "BaseBdev1", 00:16:29.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.473 "is_configured": false, 00:16:29.473 "data_offset": 0, 00:16:29.473 "data_size": 0 00:16:29.473 }, 00:16:29.473 { 00:16:29.473 "name": null, 00:16:29.473 "uuid": "7d819377-dd6d-4a2c-ae08-c69e75afbdc3", 00:16:29.473 "is_configured": false, 00:16:29.473 "data_offset": 0, 00:16:29.473 "data_size": 65536 00:16:29.473 }, 00:16:29.473 { 00:16:29.473 "name": "BaseBdev3", 00:16:29.473 "uuid": "89234d96-0b6b-46b8-9d99-426535c5bc31", 00:16:29.473 "is_configured": true, 00:16:29.473 "data_offset": 0, 00:16:29.473 "data_size": 65536 00:16:29.473 } 00:16:29.473 ] 00:16:29.474 }' 00:16:29.474 09:53:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.474 09:53:30 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.043 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.043 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.043 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.043 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:30.043 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.043 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.044 [2024-11-27 09:53:31.110621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.044 BaseBdev1 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:30.044 09:53:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.044 [ 00:16:30.044 { 00:16:30.044 "name": "BaseBdev1", 00:16:30.044 "aliases": [ 00:16:30.044 "16e02c67-3cab-4094-8ae9-196d78afb50d" 00:16:30.044 ], 00:16:30.044 "product_name": "Malloc disk", 00:16:30.044 "block_size": 512, 00:16:30.044 "num_blocks": 65536, 00:16:30.044 "uuid": "16e02c67-3cab-4094-8ae9-196d78afb50d", 00:16:30.044 "assigned_rate_limits": { 00:16:30.044 "rw_ios_per_sec": 0, 00:16:30.044 "rw_mbytes_per_sec": 0, 00:16:30.044 "r_mbytes_per_sec": 0, 00:16:30.044 "w_mbytes_per_sec": 0 00:16:30.044 }, 00:16:30.044 "claimed": true, 00:16:30.044 "claim_type": "exclusive_write", 00:16:30.044 "zoned": false, 00:16:30.044 "supported_io_types": { 00:16:30.044 "read": true, 00:16:30.044 "write": true, 00:16:30.044 "unmap": true, 00:16:30.044 "flush": true, 00:16:30.044 "reset": true, 00:16:30.044 "nvme_admin": false, 00:16:30.044 "nvme_io": false, 00:16:30.044 "nvme_io_md": false, 00:16:30.044 "write_zeroes": true, 00:16:30.044 "zcopy": true, 00:16:30.044 "get_zone_info": false, 00:16:30.044 "zone_management": false, 00:16:30.044 "zone_append": false, 00:16:30.044 
"compare": false, 00:16:30.044 "compare_and_write": false, 00:16:30.044 "abort": true, 00:16:30.044 "seek_hole": false, 00:16:30.044 "seek_data": false, 00:16:30.044 "copy": true, 00:16:30.044 "nvme_iov_md": false 00:16:30.044 }, 00:16:30.044 "memory_domains": [ 00:16:30.044 { 00:16:30.044 "dma_device_id": "system", 00:16:30.044 "dma_device_type": 1 00:16:30.044 }, 00:16:30.044 { 00:16:30.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.044 "dma_device_type": 2 00:16:30.044 } 00:16:30.044 ], 00:16:30.044 "driver_specific": {} 00:16:30.044 } 00:16:30.044 ] 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.044 09:53:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.044 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.304 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.304 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.304 "name": "Existed_Raid", 00:16:30.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.304 "strip_size_kb": 64, 00:16:30.304 "state": "configuring", 00:16:30.304 "raid_level": "raid5f", 00:16:30.304 "superblock": false, 00:16:30.304 "num_base_bdevs": 3, 00:16:30.304 "num_base_bdevs_discovered": 2, 00:16:30.304 "num_base_bdevs_operational": 3, 00:16:30.304 "base_bdevs_list": [ 00:16:30.304 { 00:16:30.304 "name": "BaseBdev1", 00:16:30.305 "uuid": "16e02c67-3cab-4094-8ae9-196d78afb50d", 00:16:30.305 "is_configured": true, 00:16:30.305 "data_offset": 0, 00:16:30.305 "data_size": 65536 00:16:30.305 }, 00:16:30.305 { 00:16:30.305 "name": null, 00:16:30.305 "uuid": "7d819377-dd6d-4a2c-ae08-c69e75afbdc3", 00:16:30.305 "is_configured": false, 00:16:30.305 "data_offset": 0, 00:16:30.305 "data_size": 65536 00:16:30.305 }, 00:16:30.305 { 00:16:30.305 "name": "BaseBdev3", 00:16:30.305 "uuid": "89234d96-0b6b-46b8-9d99-426535c5bc31", 00:16:30.305 "is_configured": true, 00:16:30.305 "data_offset": 0, 00:16:30.305 "data_size": 65536 00:16:30.305 } 00:16:30.305 ] 00:16:30.305 }' 00:16:30.305 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.305 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.565 09:53:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.565 [2024-11-27 09:53:31.649794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.565 09:53:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.565 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.825 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.825 "name": "Existed_Raid", 00:16:30.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.825 "strip_size_kb": 64, 00:16:30.825 "state": "configuring", 00:16:30.825 "raid_level": "raid5f", 00:16:30.825 "superblock": false, 00:16:30.825 "num_base_bdevs": 3, 00:16:30.825 "num_base_bdevs_discovered": 1, 00:16:30.825 "num_base_bdevs_operational": 3, 00:16:30.825 "base_bdevs_list": [ 00:16:30.825 { 00:16:30.825 "name": "BaseBdev1", 00:16:30.825 "uuid": "16e02c67-3cab-4094-8ae9-196d78afb50d", 00:16:30.825 "is_configured": true, 00:16:30.825 "data_offset": 0, 00:16:30.825 "data_size": 65536 00:16:30.825 }, 00:16:30.825 { 00:16:30.825 "name": null, 00:16:30.825 "uuid": "7d819377-dd6d-4a2c-ae08-c69e75afbdc3", 00:16:30.825 "is_configured": false, 00:16:30.825 "data_offset": 0, 00:16:30.825 "data_size": 65536 00:16:30.825 }, 00:16:30.825 { 00:16:30.825 "name": null, 
00:16:30.825 "uuid": "89234d96-0b6b-46b8-9d99-426535c5bc31", 00:16:30.825 "is_configured": false, 00:16:30.825 "data_offset": 0, 00:16:30.825 "data_size": 65536 00:16:30.825 } 00:16:30.825 ] 00:16:30.825 }' 00:16:30.825 09:53:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.825 09:53:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.084 [2024-11-27 09:53:32.185194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.084 09:53:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.084 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.344 09:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.344 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.344 "name": "Existed_Raid", 00:16:31.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.344 "strip_size_kb": 64, 00:16:31.344 "state": "configuring", 00:16:31.344 "raid_level": "raid5f", 00:16:31.344 "superblock": false, 00:16:31.344 "num_base_bdevs": 3, 00:16:31.344 "num_base_bdevs_discovered": 2, 00:16:31.344 "num_base_bdevs_operational": 3, 00:16:31.344 "base_bdevs_list": [ 00:16:31.344 { 
00:16:31.344 "name": "BaseBdev1", 00:16:31.344 "uuid": "16e02c67-3cab-4094-8ae9-196d78afb50d", 00:16:31.344 "is_configured": true, 00:16:31.344 "data_offset": 0, 00:16:31.344 "data_size": 65536 00:16:31.344 }, 00:16:31.344 { 00:16:31.344 "name": null, 00:16:31.344 "uuid": "7d819377-dd6d-4a2c-ae08-c69e75afbdc3", 00:16:31.344 "is_configured": false, 00:16:31.344 "data_offset": 0, 00:16:31.344 "data_size": 65536 00:16:31.344 }, 00:16:31.344 { 00:16:31.344 "name": "BaseBdev3", 00:16:31.344 "uuid": "89234d96-0b6b-46b8-9d99-426535c5bc31", 00:16:31.344 "is_configured": true, 00:16:31.344 "data_offset": 0, 00:16:31.344 "data_size": 65536 00:16:31.344 } 00:16:31.344 ] 00:16:31.344 }' 00:16:31.344 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.344 09:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.604 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:31.604 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.604 09:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.604 09:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.604 09:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.604 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:31.604 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:31.604 09:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.604 09:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.604 [2024-11-27 09:53:32.676507] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:31.864 09:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.864 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:31.864 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.864 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.864 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.864 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.864 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:31.864 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.864 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.864 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.864 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.864 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.864 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.864 09:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.864 09:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.864 09:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.864 09:53:32 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.864 "name": "Existed_Raid", 00:16:31.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.864 "strip_size_kb": 64, 00:16:31.864 "state": "configuring", 00:16:31.864 "raid_level": "raid5f", 00:16:31.864 "superblock": false, 00:16:31.864 "num_base_bdevs": 3, 00:16:31.864 "num_base_bdevs_discovered": 1, 00:16:31.864 "num_base_bdevs_operational": 3, 00:16:31.864 "base_bdevs_list": [ 00:16:31.864 { 00:16:31.864 "name": null, 00:16:31.864 "uuid": "16e02c67-3cab-4094-8ae9-196d78afb50d", 00:16:31.864 "is_configured": false, 00:16:31.865 "data_offset": 0, 00:16:31.865 "data_size": 65536 00:16:31.865 }, 00:16:31.865 { 00:16:31.865 "name": null, 00:16:31.865 "uuid": "7d819377-dd6d-4a2c-ae08-c69e75afbdc3", 00:16:31.865 "is_configured": false, 00:16:31.865 "data_offset": 0, 00:16:31.865 "data_size": 65536 00:16:31.865 }, 00:16:31.865 { 00:16:31.865 "name": "BaseBdev3", 00:16:31.865 "uuid": "89234d96-0b6b-46b8-9d99-426535c5bc31", 00:16:31.865 "is_configured": true, 00:16:31.865 "data_offset": 0, 00:16:31.865 "data_size": 65536 00:16:31.865 } 00:16:31.865 ] 00:16:31.865 }' 00:16:31.865 09:53:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.865 09:53:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.124 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.124 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.124 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.124 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:32.124 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.124 09:53:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:32.124 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:32.124 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.124 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.124 [2024-11-27 09:53:33.248522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.384 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.384 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:32.384 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.384 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:32.384 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.384 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.384 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.384 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.384 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.384 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.384 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.384 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.384 09:53:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.384 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.384 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.384 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.384 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.384 "name": "Existed_Raid", 00:16:32.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.384 "strip_size_kb": 64, 00:16:32.384 "state": "configuring", 00:16:32.384 "raid_level": "raid5f", 00:16:32.384 "superblock": false, 00:16:32.384 "num_base_bdevs": 3, 00:16:32.384 "num_base_bdevs_discovered": 2, 00:16:32.384 "num_base_bdevs_operational": 3, 00:16:32.384 "base_bdevs_list": [ 00:16:32.384 { 00:16:32.384 "name": null, 00:16:32.384 "uuid": "16e02c67-3cab-4094-8ae9-196d78afb50d", 00:16:32.384 "is_configured": false, 00:16:32.384 "data_offset": 0, 00:16:32.384 "data_size": 65536 00:16:32.384 }, 00:16:32.384 { 00:16:32.384 "name": "BaseBdev2", 00:16:32.384 "uuid": "7d819377-dd6d-4a2c-ae08-c69e75afbdc3", 00:16:32.384 "is_configured": true, 00:16:32.384 "data_offset": 0, 00:16:32.384 "data_size": 65536 00:16:32.384 }, 00:16:32.384 { 00:16:32.384 "name": "BaseBdev3", 00:16:32.384 "uuid": "89234d96-0b6b-46b8-9d99-426535c5bc31", 00:16:32.384 "is_configured": true, 00:16:32.384 "data_offset": 0, 00:16:32.384 "data_size": 65536 00:16:32.384 } 00:16:32.384 ] 00:16:32.384 }' 00:16:32.384 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.384 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:32.644 09:53:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 16e02c67-3cab-4094-8ae9-196d78afb50d 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.644 [2024-11-27 09:53:33.729103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:32.644 [2024-11-27 09:53:33.729252] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:32.644 [2024-11-27 09:53:33.729287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:32.644 [2024-11-27 09:53:33.729627] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:16:32.644 [2024-11-27 09:53:33.734852] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:32.644 [2024-11-27 09:53:33.734923] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:32.644 NewBaseBdev 00:16:32.644 [2024-11-27 09:53:33.735317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:32.644 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.645 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.645 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.645 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:32.645 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.645 09:53:33 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.645 [ 00:16:32.645 { 00:16:32.645 "name": "NewBaseBdev", 00:16:32.645 "aliases": [ 00:16:32.645 "16e02c67-3cab-4094-8ae9-196d78afb50d" 00:16:32.645 ], 00:16:32.645 "product_name": "Malloc disk", 00:16:32.645 "block_size": 512, 00:16:32.645 "num_blocks": 65536, 00:16:32.645 "uuid": "16e02c67-3cab-4094-8ae9-196d78afb50d", 00:16:32.645 "assigned_rate_limits": { 00:16:32.645 "rw_ios_per_sec": 0, 00:16:32.645 "rw_mbytes_per_sec": 0, 00:16:32.645 "r_mbytes_per_sec": 0, 00:16:32.645 "w_mbytes_per_sec": 0 00:16:32.645 }, 00:16:32.645 "claimed": true, 00:16:32.645 "claim_type": "exclusive_write", 00:16:32.645 "zoned": false, 00:16:32.645 "supported_io_types": { 00:16:32.645 "read": true, 00:16:32.645 "write": true, 00:16:32.645 "unmap": true, 00:16:32.645 "flush": true, 00:16:32.645 "reset": true, 00:16:32.645 "nvme_admin": false, 00:16:32.645 "nvme_io": false, 00:16:32.645 "nvme_io_md": false, 00:16:32.645 "write_zeroes": true, 00:16:32.645 "zcopy": true, 00:16:32.645 "get_zone_info": false, 00:16:32.645 "zone_management": false, 00:16:32.645 "zone_append": false, 00:16:32.645 "compare": false, 00:16:32.645 "compare_and_write": false, 00:16:32.645 "abort": true, 00:16:32.645 "seek_hole": false, 00:16:32.645 "seek_data": false, 00:16:32.645 "copy": true, 00:16:32.645 "nvme_iov_md": false 00:16:32.645 }, 00:16:32.645 "memory_domains": [ 00:16:32.645 { 00:16:32.645 "dma_device_id": "system", 00:16:32.645 "dma_device_type": 1 00:16:32.645 }, 00:16:32.645 { 00:16:32.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.645 "dma_device_type": 2 00:16:32.645 } 00:16:32.645 ], 00:16:32.645 "driver_specific": {} 00:16:32.645 } 00:16:32.645 ] 00:16:32.645 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.645 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:32.645 09:53:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:32.645 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.645 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.645 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.645 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.645 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.645 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.645 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.645 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.904 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.904 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.904 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.904 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.904 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.904 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.904 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.904 "name": "Existed_Raid", 00:16:32.904 "uuid": "a4de2729-c845-46e4-9ea8-535ba9c5d79c", 00:16:32.904 "strip_size_kb": 64, 00:16:32.904 "state": "online", 
00:16:32.904 "raid_level": "raid5f", 00:16:32.904 "superblock": false, 00:16:32.904 "num_base_bdevs": 3, 00:16:32.904 "num_base_bdevs_discovered": 3, 00:16:32.904 "num_base_bdevs_operational": 3, 00:16:32.904 "base_bdevs_list": [ 00:16:32.904 { 00:16:32.904 "name": "NewBaseBdev", 00:16:32.904 "uuid": "16e02c67-3cab-4094-8ae9-196d78afb50d", 00:16:32.904 "is_configured": true, 00:16:32.904 "data_offset": 0, 00:16:32.904 "data_size": 65536 00:16:32.904 }, 00:16:32.905 { 00:16:32.905 "name": "BaseBdev2", 00:16:32.905 "uuid": "7d819377-dd6d-4a2c-ae08-c69e75afbdc3", 00:16:32.905 "is_configured": true, 00:16:32.905 "data_offset": 0, 00:16:32.905 "data_size": 65536 00:16:32.905 }, 00:16:32.905 { 00:16:32.905 "name": "BaseBdev3", 00:16:32.905 "uuid": "89234d96-0b6b-46b8-9d99-426535c5bc31", 00:16:32.905 "is_configured": true, 00:16:32.905 "data_offset": 0, 00:16:32.905 "data_size": 65536 00:16:32.905 } 00:16:32.905 ] 00:16:32.905 }' 00:16:32.905 09:53:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.905 09:53:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.165 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:33.165 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:33.165 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:33.165 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:33.165 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:33.165 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:33.165 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:33.165 09:53:34 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:33.165 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.165 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.165 [2024-11-27 09:53:34.210039] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.165 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.165 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:33.165 "name": "Existed_Raid", 00:16:33.165 "aliases": [ 00:16:33.165 "a4de2729-c845-46e4-9ea8-535ba9c5d79c" 00:16:33.165 ], 00:16:33.165 "product_name": "Raid Volume", 00:16:33.165 "block_size": 512, 00:16:33.165 "num_blocks": 131072, 00:16:33.165 "uuid": "a4de2729-c845-46e4-9ea8-535ba9c5d79c", 00:16:33.165 "assigned_rate_limits": { 00:16:33.165 "rw_ios_per_sec": 0, 00:16:33.165 "rw_mbytes_per_sec": 0, 00:16:33.165 "r_mbytes_per_sec": 0, 00:16:33.165 "w_mbytes_per_sec": 0 00:16:33.165 }, 00:16:33.165 "claimed": false, 00:16:33.165 "zoned": false, 00:16:33.165 "supported_io_types": { 00:16:33.165 "read": true, 00:16:33.165 "write": true, 00:16:33.165 "unmap": false, 00:16:33.165 "flush": false, 00:16:33.165 "reset": true, 00:16:33.165 "nvme_admin": false, 00:16:33.165 "nvme_io": false, 00:16:33.165 "nvme_io_md": false, 00:16:33.165 "write_zeroes": true, 00:16:33.165 "zcopy": false, 00:16:33.165 "get_zone_info": false, 00:16:33.165 "zone_management": false, 00:16:33.165 "zone_append": false, 00:16:33.165 "compare": false, 00:16:33.165 "compare_and_write": false, 00:16:33.165 "abort": false, 00:16:33.165 "seek_hole": false, 00:16:33.165 "seek_data": false, 00:16:33.165 "copy": false, 00:16:33.165 "nvme_iov_md": false 00:16:33.165 }, 00:16:33.165 "driver_specific": { 00:16:33.165 "raid": { 00:16:33.165 "uuid": "a4de2729-c845-46e4-9ea8-535ba9c5d79c", 
00:16:33.165 "strip_size_kb": 64, 00:16:33.165 "state": "online", 00:16:33.165 "raid_level": "raid5f", 00:16:33.165 "superblock": false, 00:16:33.165 "num_base_bdevs": 3, 00:16:33.165 "num_base_bdevs_discovered": 3, 00:16:33.165 "num_base_bdevs_operational": 3, 00:16:33.165 "base_bdevs_list": [ 00:16:33.165 { 00:16:33.165 "name": "NewBaseBdev", 00:16:33.165 "uuid": "16e02c67-3cab-4094-8ae9-196d78afb50d", 00:16:33.165 "is_configured": true, 00:16:33.165 "data_offset": 0, 00:16:33.165 "data_size": 65536 00:16:33.165 }, 00:16:33.165 { 00:16:33.165 "name": "BaseBdev2", 00:16:33.165 "uuid": "7d819377-dd6d-4a2c-ae08-c69e75afbdc3", 00:16:33.165 "is_configured": true, 00:16:33.165 "data_offset": 0, 00:16:33.165 "data_size": 65536 00:16:33.165 }, 00:16:33.165 { 00:16:33.165 "name": "BaseBdev3", 00:16:33.165 "uuid": "89234d96-0b6b-46b8-9d99-426535c5bc31", 00:16:33.165 "is_configured": true, 00:16:33.165 "data_offset": 0, 00:16:33.165 "data_size": 65536 00:16:33.165 } 00:16:33.165 ] 00:16:33.165 } 00:16:33.165 } 00:16:33.165 }' 00:16:33.165 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:33.165 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:33.165 BaseBdev2 00:16:33.165 BaseBdev3' 00:16:33.165 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:33.425 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:33.426 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:33.426 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.426 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.426 [2024-11-27 09:53:34.477305] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:33.426 [2024-11-27 09:53:34.477390] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.426 [2024-11-27 09:53:34.477513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.426 [2024-11-27 09:53:34.477845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.426 [2024-11-27 09:53:34.477863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:33.426 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.426 09:53:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80200 00:16:33.426 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80200 ']' 00:16:33.426 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 
80200 00:16:33.426 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:33.426 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:33.426 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80200 00:16:33.426 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:33.426 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:33.426 killing process with pid 80200 00:16:33.426 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80200' 00:16:33.426 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80200 00:16:33.426 [2024-11-27 09:53:34.528615] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:33.426 09:53:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80200 00:16:33.996 [2024-11-27 09:53:34.853049] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:35.386 00:16:35.386 real 0m10.886s 00:16:35.386 user 0m16.873s 00:16:35.386 sys 0m2.223s 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.386 ************************************ 00:16:35.386 END TEST raid5f_state_function_test 00:16:35.386 ************************************ 00:16:35.386 09:53:36 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:16:35.386 09:53:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 
00:16:35.386 09:53:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.386 09:53:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:35.386 ************************************ 00:16:35.386 START TEST raid5f_state_function_test_sb 00:16:35.386 ************************************ 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80818 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:35.386 Process raid pid: 80818 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80818' 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80818 00:16:35.386 09:53:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80818 ']' 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.386 09:53:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.387 09:53:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.387 09:53:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.387 09:53:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.387 [2024-11-27 09:53:36.273525] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:16:35.387 [2024-11-27 09:53:36.273734] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.387 [2024-11-27 09:53:36.455112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.646 [2024-11-27 09:53:36.599479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.905 [2024-11-27 09:53:36.842076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.905 [2024-11-27 09:53:36.842270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:36.164 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.164 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:36.164 09:53:37 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:36.164 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.164 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.164 [2024-11-27 09:53:37.132697] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.164 [2024-11-27 09:53:37.132838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.164 [2024-11-27 09:53:37.132885] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.164 [2024-11-27 09:53:37.132918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.164 [2024-11-27 09:53:37.132957] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:36.164 [2024-11-27 09:53:37.132973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:36.164 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.164 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:36.164 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.164 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.164 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.164 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.164 09:53:37 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.164 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.164 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.164 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.164 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.164 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.165 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.165 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.165 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.165 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.165 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.165 "name": "Existed_Raid", 00:16:36.165 "uuid": "1cf92ad7-7153-471d-ae16-625c3227dd9c", 00:16:36.165 "strip_size_kb": 64, 00:16:36.165 "state": "configuring", 00:16:36.165 "raid_level": "raid5f", 00:16:36.165 "superblock": true, 00:16:36.165 "num_base_bdevs": 3, 00:16:36.165 "num_base_bdevs_discovered": 0, 00:16:36.165 "num_base_bdevs_operational": 3, 00:16:36.165 "base_bdevs_list": [ 00:16:36.165 { 00:16:36.165 "name": "BaseBdev1", 00:16:36.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.165 "is_configured": false, 00:16:36.165 "data_offset": 0, 00:16:36.165 "data_size": 0 00:16:36.165 }, 00:16:36.165 { 00:16:36.165 "name": "BaseBdev2", 00:16:36.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.165 "is_configured": false, 00:16:36.165 
"data_offset": 0, 00:16:36.165 "data_size": 0 00:16:36.165 }, 00:16:36.165 { 00:16:36.165 "name": "BaseBdev3", 00:16:36.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.165 "is_configured": false, 00:16:36.165 "data_offset": 0, 00:16:36.165 "data_size": 0 00:16:36.165 } 00:16:36.165 ] 00:16:36.165 }' 00:16:36.165 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.165 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.735 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.736 [2024-11-27 09:53:37.615780] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:36.736 [2024-11-27 09:53:37.615913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.736 [2024-11-27 09:53:37.627776] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.736 [2024-11-27 09:53:37.627899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.736 [2024-11-27 09:53:37.627937] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.736 [2024-11-27 09:53:37.627966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.736 [2024-11-27 09:53:37.627988] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:36.736 [2024-11-27 09:53:37.628028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.736 [2024-11-27 09:53:37.682327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:36.736 BaseBdev1 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.736 [ 00:16:36.736 { 00:16:36.736 "name": "BaseBdev1", 00:16:36.736 "aliases": [ 00:16:36.736 "cbaea8d5-7105-4ff6-a6bc-be3a8bfef8d8" 00:16:36.736 ], 00:16:36.736 "product_name": "Malloc disk", 00:16:36.736 "block_size": 512, 00:16:36.736 "num_blocks": 65536, 00:16:36.736 "uuid": "cbaea8d5-7105-4ff6-a6bc-be3a8bfef8d8", 00:16:36.736 "assigned_rate_limits": { 00:16:36.736 "rw_ios_per_sec": 0, 00:16:36.736 "rw_mbytes_per_sec": 0, 00:16:36.736 "r_mbytes_per_sec": 0, 00:16:36.736 "w_mbytes_per_sec": 0 00:16:36.736 }, 00:16:36.736 "claimed": true, 00:16:36.736 "claim_type": "exclusive_write", 00:16:36.736 "zoned": false, 00:16:36.736 "supported_io_types": { 00:16:36.736 "read": true, 00:16:36.736 "write": true, 00:16:36.736 "unmap": true, 00:16:36.736 "flush": true, 00:16:36.736 "reset": true, 00:16:36.736 "nvme_admin": false, 00:16:36.736 "nvme_io": false, 00:16:36.736 "nvme_io_md": false, 00:16:36.736 "write_zeroes": true, 00:16:36.736 "zcopy": true, 00:16:36.736 "get_zone_info": false, 00:16:36.736 "zone_management": false, 00:16:36.736 "zone_append": false, 00:16:36.736 "compare": false, 00:16:36.736 "compare_and_write": false, 00:16:36.736 "abort": true, 00:16:36.736 "seek_hole": false, 00:16:36.736 
"seek_data": false, 00:16:36.736 "copy": true, 00:16:36.736 "nvme_iov_md": false 00:16:36.736 }, 00:16:36.736 "memory_domains": [ 00:16:36.736 { 00:16:36.736 "dma_device_id": "system", 00:16:36.736 "dma_device_type": 1 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.736 "dma_device_type": 2 00:16:36.736 } 00:16:36.736 ], 00:16:36.736 "driver_specific": {} 00:16:36.736 } 00:16:36.736 ] 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.736 "name": "Existed_Raid", 00:16:36.736 "uuid": "a9f078b4-358c-4f7f-a453-fb25376d9f45", 00:16:36.736 "strip_size_kb": 64, 00:16:36.736 "state": "configuring", 00:16:36.736 "raid_level": "raid5f", 00:16:36.736 "superblock": true, 00:16:36.736 "num_base_bdevs": 3, 00:16:36.736 "num_base_bdevs_discovered": 1, 00:16:36.736 "num_base_bdevs_operational": 3, 00:16:36.736 "base_bdevs_list": [ 00:16:36.736 { 00:16:36.736 "name": "BaseBdev1", 00:16:36.736 "uuid": "cbaea8d5-7105-4ff6-a6bc-be3a8bfef8d8", 00:16:36.736 "is_configured": true, 00:16:36.736 "data_offset": 2048, 00:16:36.736 "data_size": 63488 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "name": "BaseBdev2", 00:16:36.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.736 "is_configured": false, 00:16:36.736 "data_offset": 0, 00:16:36.736 "data_size": 0 00:16:36.736 }, 00:16:36.736 { 00:16:36.736 "name": "BaseBdev3", 00:16:36.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.736 "is_configured": false, 00:16:36.736 "data_offset": 0, 00:16:36.736 "data_size": 0 00:16:36.736 } 00:16:36.736 ] 00:16:36.736 }' 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.736 09:53:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.305 [2024-11-27 09:53:38.161650] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:37.305 [2024-11-27 09:53:38.161820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.305 [2024-11-27 09:53:38.173726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.305 [2024-11-27 09:53:38.176169] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:37.305 [2024-11-27 09:53:38.176282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:37.305 [2024-11-27 09:53:38.176318] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:37.305 [2024-11-27 09:53:38.176347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.305 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.305 "name": 
"Existed_Raid", 00:16:37.306 "uuid": "92033afb-f818-4068-8a56-f5991d1f5f99", 00:16:37.306 "strip_size_kb": 64, 00:16:37.306 "state": "configuring", 00:16:37.306 "raid_level": "raid5f", 00:16:37.306 "superblock": true, 00:16:37.306 "num_base_bdevs": 3, 00:16:37.306 "num_base_bdevs_discovered": 1, 00:16:37.306 "num_base_bdevs_operational": 3, 00:16:37.306 "base_bdevs_list": [ 00:16:37.306 { 00:16:37.306 "name": "BaseBdev1", 00:16:37.306 "uuid": "cbaea8d5-7105-4ff6-a6bc-be3a8bfef8d8", 00:16:37.306 "is_configured": true, 00:16:37.306 "data_offset": 2048, 00:16:37.306 "data_size": 63488 00:16:37.306 }, 00:16:37.306 { 00:16:37.306 "name": "BaseBdev2", 00:16:37.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.306 "is_configured": false, 00:16:37.306 "data_offset": 0, 00:16:37.306 "data_size": 0 00:16:37.306 }, 00:16:37.306 { 00:16:37.306 "name": "BaseBdev3", 00:16:37.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.306 "is_configured": false, 00:16:37.306 "data_offset": 0, 00:16:37.306 "data_size": 0 00:16:37.306 } 00:16:37.306 ] 00:16:37.306 }' 00:16:37.306 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.306 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.565 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:37.565 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.565 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.565 [2024-11-27 09:53:38.621913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.565 BaseBdev2 00:16:37.565 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.565 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:16:37.565 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:37.565 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:37.565 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:37.565 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:37.565 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:37.565 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:37.565 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.565 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.565 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.565 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:37.565 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.565 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.565 [ 00:16:37.565 { 00:16:37.565 "name": "BaseBdev2", 00:16:37.565 "aliases": [ 00:16:37.565 "1d91466b-32e8-491e-aaea-b24092257a47" 00:16:37.565 ], 00:16:37.565 "product_name": "Malloc disk", 00:16:37.566 "block_size": 512, 00:16:37.566 "num_blocks": 65536, 00:16:37.566 "uuid": "1d91466b-32e8-491e-aaea-b24092257a47", 00:16:37.566 "assigned_rate_limits": { 00:16:37.566 "rw_ios_per_sec": 0, 00:16:37.566 "rw_mbytes_per_sec": 0, 00:16:37.566 "r_mbytes_per_sec": 0, 00:16:37.566 "w_mbytes_per_sec": 0 00:16:37.566 }, 00:16:37.566 "claimed": true, 
00:16:37.566 "claim_type": "exclusive_write", 00:16:37.566 "zoned": false, 00:16:37.566 "supported_io_types": { 00:16:37.566 "read": true, 00:16:37.566 "write": true, 00:16:37.566 "unmap": true, 00:16:37.566 "flush": true, 00:16:37.566 "reset": true, 00:16:37.566 "nvme_admin": false, 00:16:37.566 "nvme_io": false, 00:16:37.566 "nvme_io_md": false, 00:16:37.566 "write_zeroes": true, 00:16:37.566 "zcopy": true, 00:16:37.566 "get_zone_info": false, 00:16:37.566 "zone_management": false, 00:16:37.566 "zone_append": false, 00:16:37.566 "compare": false, 00:16:37.566 "compare_and_write": false, 00:16:37.566 "abort": true, 00:16:37.566 "seek_hole": false, 00:16:37.566 "seek_data": false, 00:16:37.566 "copy": true, 00:16:37.566 "nvme_iov_md": false 00:16:37.566 }, 00:16:37.566 "memory_domains": [ 00:16:37.566 { 00:16:37.566 "dma_device_id": "system", 00:16:37.566 "dma_device_type": 1 00:16:37.566 }, 00:16:37.566 { 00:16:37.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.566 "dma_device_type": 2 00:16:37.566 } 00:16:37.566 ], 00:16:37.566 "driver_specific": {} 00:16:37.566 } 00:16:37.566 ] 00:16:37.566 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.566 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:37.566 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:37.566 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:37.566 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:37.566 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:37.566 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:37.566 09:53:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.566 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.566 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:37.566 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.566 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.566 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.566 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.566 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.566 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.566 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.566 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.566 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.825 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.825 "name": "Existed_Raid", 00:16:37.825 "uuid": "92033afb-f818-4068-8a56-f5991d1f5f99", 00:16:37.825 "strip_size_kb": 64, 00:16:37.825 "state": "configuring", 00:16:37.825 "raid_level": "raid5f", 00:16:37.825 "superblock": true, 00:16:37.825 "num_base_bdevs": 3, 00:16:37.825 "num_base_bdevs_discovered": 2, 00:16:37.825 "num_base_bdevs_operational": 3, 00:16:37.825 "base_bdevs_list": [ 00:16:37.825 { 00:16:37.825 "name": "BaseBdev1", 00:16:37.825 "uuid": "cbaea8d5-7105-4ff6-a6bc-be3a8bfef8d8", 
00:16:37.825 "is_configured": true, 00:16:37.825 "data_offset": 2048, 00:16:37.825 "data_size": 63488 00:16:37.825 }, 00:16:37.825 { 00:16:37.825 "name": "BaseBdev2", 00:16:37.825 "uuid": "1d91466b-32e8-491e-aaea-b24092257a47", 00:16:37.825 "is_configured": true, 00:16:37.825 "data_offset": 2048, 00:16:37.826 "data_size": 63488 00:16:37.826 }, 00:16:37.826 { 00:16:37.826 "name": "BaseBdev3", 00:16:37.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.826 "is_configured": false, 00:16:37.826 "data_offset": 0, 00:16:37.826 "data_size": 0 00:16:37.826 } 00:16:37.826 ] 00:16:37.826 }' 00:16:37.826 09:53:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.826 09:53:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.086 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:38.086 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.086 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.086 [2024-11-27 09:53:39.143904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:38.086 [2024-11-27 09:53:39.144516] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:38.086 [2024-11-27 09:53:39.144596] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:38.086 [2024-11-27 09:53:39.144964] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:38.086 BaseBdev3 00:16:38.086 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.086 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:38.086 09:53:39 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:38.086 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:38.086 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:38.086 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:38.086 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:38.086 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:38.086 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.086 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.086 [2024-11-27 09:53:39.150950] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:38.086 [2024-11-27 09:53:39.151034] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:38.086 [2024-11-27 09:53:39.151336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.086 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.086 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:38.086 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.086 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.086 [ 00:16:38.086 { 00:16:38.086 "name": "BaseBdev3", 00:16:38.086 "aliases": [ 00:16:38.086 "e8b288c8-fccd-42c1-a877-1b0691b45383" 00:16:38.086 ], 00:16:38.086 "product_name": "Malloc disk", 00:16:38.086 "block_size": 512, 00:16:38.086 
"num_blocks": 65536, 00:16:38.086 "uuid": "e8b288c8-fccd-42c1-a877-1b0691b45383", 00:16:38.086 "assigned_rate_limits": { 00:16:38.086 "rw_ios_per_sec": 0, 00:16:38.086 "rw_mbytes_per_sec": 0, 00:16:38.086 "r_mbytes_per_sec": 0, 00:16:38.086 "w_mbytes_per_sec": 0 00:16:38.086 }, 00:16:38.086 "claimed": true, 00:16:38.086 "claim_type": "exclusive_write", 00:16:38.086 "zoned": false, 00:16:38.086 "supported_io_types": { 00:16:38.086 "read": true, 00:16:38.086 "write": true, 00:16:38.086 "unmap": true, 00:16:38.086 "flush": true, 00:16:38.086 "reset": true, 00:16:38.086 "nvme_admin": false, 00:16:38.086 "nvme_io": false, 00:16:38.086 "nvme_io_md": false, 00:16:38.086 "write_zeroes": true, 00:16:38.086 "zcopy": true, 00:16:38.086 "get_zone_info": false, 00:16:38.086 "zone_management": false, 00:16:38.086 "zone_append": false, 00:16:38.086 "compare": false, 00:16:38.086 "compare_and_write": false, 00:16:38.086 "abort": true, 00:16:38.086 "seek_hole": false, 00:16:38.086 "seek_data": false, 00:16:38.086 "copy": true, 00:16:38.086 "nvme_iov_md": false 00:16:38.086 }, 00:16:38.086 "memory_domains": [ 00:16:38.086 { 00:16:38.086 "dma_device_id": "system", 00:16:38.086 "dma_device_type": 1 00:16:38.086 }, 00:16:38.086 { 00:16:38.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.086 "dma_device_type": 2 00:16:38.086 } 00:16:38.086 ], 00:16:38.086 "driver_specific": {} 00:16:38.086 } 00:16:38.086 ] 00:16:38.086 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.086 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:38.086 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:38.087 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:38.087 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:16:38.087 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:38.087 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.087 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.087 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.087 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.087 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.087 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.087 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.087 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.087 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.087 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.087 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.087 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.087 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.345 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.345 "name": "Existed_Raid", 00:16:38.345 "uuid": "92033afb-f818-4068-8a56-f5991d1f5f99", 00:16:38.345 "strip_size_kb": 64, 00:16:38.345 "state": "online", 00:16:38.345 "raid_level": "raid5f", 00:16:38.345 "superblock": true, 
00:16:38.345 "num_base_bdevs": 3, 00:16:38.345 "num_base_bdevs_discovered": 3, 00:16:38.345 "num_base_bdevs_operational": 3, 00:16:38.345 "base_bdevs_list": [ 00:16:38.345 { 00:16:38.345 "name": "BaseBdev1", 00:16:38.345 "uuid": "cbaea8d5-7105-4ff6-a6bc-be3a8bfef8d8", 00:16:38.345 "is_configured": true, 00:16:38.345 "data_offset": 2048, 00:16:38.345 "data_size": 63488 00:16:38.345 }, 00:16:38.345 { 00:16:38.345 "name": "BaseBdev2", 00:16:38.345 "uuid": "1d91466b-32e8-491e-aaea-b24092257a47", 00:16:38.345 "is_configured": true, 00:16:38.345 "data_offset": 2048, 00:16:38.345 "data_size": 63488 00:16:38.345 }, 00:16:38.345 { 00:16:38.345 "name": "BaseBdev3", 00:16:38.345 "uuid": "e8b288c8-fccd-42c1-a877-1b0691b45383", 00:16:38.345 "is_configured": true, 00:16:38.345 "data_offset": 2048, 00:16:38.345 "data_size": 63488 00:16:38.345 } 00:16:38.345 ] 00:16:38.345 }' 00:16:38.345 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.345 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.603 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:38.603 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:38.603 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:38.603 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:38.603 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:38.603 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:38.603 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:38.603 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:38.603 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.604 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.604 [2024-11-27 09:53:39.686033] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.604 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.604 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:38.604 "name": "Existed_Raid", 00:16:38.604 "aliases": [ 00:16:38.604 "92033afb-f818-4068-8a56-f5991d1f5f99" 00:16:38.604 ], 00:16:38.604 "product_name": "Raid Volume", 00:16:38.604 "block_size": 512, 00:16:38.604 "num_blocks": 126976, 00:16:38.604 "uuid": "92033afb-f818-4068-8a56-f5991d1f5f99", 00:16:38.604 "assigned_rate_limits": { 00:16:38.604 "rw_ios_per_sec": 0, 00:16:38.604 "rw_mbytes_per_sec": 0, 00:16:38.604 "r_mbytes_per_sec": 0, 00:16:38.604 "w_mbytes_per_sec": 0 00:16:38.604 }, 00:16:38.604 "claimed": false, 00:16:38.604 "zoned": false, 00:16:38.604 "supported_io_types": { 00:16:38.604 "read": true, 00:16:38.604 "write": true, 00:16:38.604 "unmap": false, 00:16:38.604 "flush": false, 00:16:38.604 "reset": true, 00:16:38.604 "nvme_admin": false, 00:16:38.604 "nvme_io": false, 00:16:38.604 "nvme_io_md": false, 00:16:38.604 "write_zeroes": true, 00:16:38.604 "zcopy": false, 00:16:38.604 "get_zone_info": false, 00:16:38.604 "zone_management": false, 00:16:38.604 "zone_append": false, 00:16:38.604 "compare": false, 00:16:38.604 "compare_and_write": false, 00:16:38.604 "abort": false, 00:16:38.604 "seek_hole": false, 00:16:38.604 "seek_data": false, 00:16:38.604 "copy": false, 00:16:38.604 "nvme_iov_md": false 00:16:38.604 }, 00:16:38.604 "driver_specific": { 00:16:38.604 "raid": { 00:16:38.604 "uuid": "92033afb-f818-4068-8a56-f5991d1f5f99", 00:16:38.604 
"strip_size_kb": 64, 00:16:38.604 "state": "online", 00:16:38.604 "raid_level": "raid5f", 00:16:38.604 "superblock": true, 00:16:38.604 "num_base_bdevs": 3, 00:16:38.604 "num_base_bdevs_discovered": 3, 00:16:38.604 "num_base_bdevs_operational": 3, 00:16:38.604 "base_bdevs_list": [ 00:16:38.604 { 00:16:38.604 "name": "BaseBdev1", 00:16:38.604 "uuid": "cbaea8d5-7105-4ff6-a6bc-be3a8bfef8d8", 00:16:38.604 "is_configured": true, 00:16:38.604 "data_offset": 2048, 00:16:38.604 "data_size": 63488 00:16:38.604 }, 00:16:38.604 { 00:16:38.604 "name": "BaseBdev2", 00:16:38.604 "uuid": "1d91466b-32e8-491e-aaea-b24092257a47", 00:16:38.604 "is_configured": true, 00:16:38.604 "data_offset": 2048, 00:16:38.604 "data_size": 63488 00:16:38.604 }, 00:16:38.604 { 00:16:38.604 "name": "BaseBdev3", 00:16:38.604 "uuid": "e8b288c8-fccd-42c1-a877-1b0691b45383", 00:16:38.604 "is_configured": true, 00:16:38.604 "data_offset": 2048, 00:16:38.604 "data_size": 63488 00:16:38.604 } 00:16:38.604 ] 00:16:38.604 } 00:16:38.604 } 00:16:38.604 }' 00:16:38.604 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:38.863 BaseBdev2 00:16:38.863 BaseBdev3' 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.863 09:53:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.864 [2024-11-27 09:53:39.905505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.122 "name": "Existed_Raid", 00:16:39.122 "uuid": "92033afb-f818-4068-8a56-f5991d1f5f99", 00:16:39.122 "strip_size_kb": 64, 00:16:39.122 "state": "online", 00:16:39.122 "raid_level": "raid5f", 00:16:39.122 "superblock": true, 00:16:39.122 "num_base_bdevs": 3, 00:16:39.122 "num_base_bdevs_discovered": 2, 00:16:39.122 "num_base_bdevs_operational": 2, 
00:16:39.122 "base_bdevs_list": [ 00:16:39.122 { 00:16:39.122 "name": null, 00:16:39.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.122 "is_configured": false, 00:16:39.122 "data_offset": 0, 00:16:39.122 "data_size": 63488 00:16:39.122 }, 00:16:39.122 { 00:16:39.122 "name": "BaseBdev2", 00:16:39.122 "uuid": "1d91466b-32e8-491e-aaea-b24092257a47", 00:16:39.122 "is_configured": true, 00:16:39.122 "data_offset": 2048, 00:16:39.122 "data_size": 63488 00:16:39.122 }, 00:16:39.122 { 00:16:39.122 "name": "BaseBdev3", 00:16:39.122 "uuid": "e8b288c8-fccd-42c1-a877-1b0691b45383", 00:16:39.122 "is_configured": true, 00:16:39.122 "data_offset": 2048, 00:16:39.122 "data_size": 63488 00:16:39.122 } 00:16:39.122 ] 00:16:39.122 }' 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.122 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.381 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:39.381 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:39.381 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:39.381 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.381 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.381 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.381 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.640 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:39.640 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:16:39.640 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:39.640 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.640 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.640 [2024-11-27 09:53:40.525928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:39.640 [2024-11-27 09:53:40.526270] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:39.640 [2024-11-27 09:53:40.630486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.640 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.640 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:39.640 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:39.640 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.640 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:39.640 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.640 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.640 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.640 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:39.640 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:39.640 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:39.640 
09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.640 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.640 [2024-11-27 09:53:40.690440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:39.640 [2024-11-27 09:53:40.690607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.899 BaseBdev2 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.899 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.900 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:39.900 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.900 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.900 [ 00:16:39.900 { 
00:16:39.900 "name": "BaseBdev2", 00:16:39.900 "aliases": [ 00:16:39.900 "b5b82999-a3d5-44a3-a721-a347e767a280" 00:16:39.900 ], 00:16:39.900 "product_name": "Malloc disk", 00:16:39.900 "block_size": 512, 00:16:39.900 "num_blocks": 65536, 00:16:39.900 "uuid": "b5b82999-a3d5-44a3-a721-a347e767a280", 00:16:39.900 "assigned_rate_limits": { 00:16:39.900 "rw_ios_per_sec": 0, 00:16:39.900 "rw_mbytes_per_sec": 0, 00:16:39.900 "r_mbytes_per_sec": 0, 00:16:39.900 "w_mbytes_per_sec": 0 00:16:39.900 }, 00:16:39.900 "claimed": false, 00:16:39.900 "zoned": false, 00:16:39.900 "supported_io_types": { 00:16:39.900 "read": true, 00:16:39.900 "write": true, 00:16:39.900 "unmap": true, 00:16:39.900 "flush": true, 00:16:39.900 "reset": true, 00:16:39.900 "nvme_admin": false, 00:16:39.900 "nvme_io": false, 00:16:39.900 "nvme_io_md": false, 00:16:39.900 "write_zeroes": true, 00:16:39.900 "zcopy": true, 00:16:39.900 "get_zone_info": false, 00:16:39.900 "zone_management": false, 00:16:39.900 "zone_append": false, 00:16:39.900 "compare": false, 00:16:39.900 "compare_and_write": false, 00:16:39.900 "abort": true, 00:16:39.900 "seek_hole": false, 00:16:39.900 "seek_data": false, 00:16:39.900 "copy": true, 00:16:39.900 "nvme_iov_md": false 00:16:39.900 }, 00:16:39.900 "memory_domains": [ 00:16:39.900 { 00:16:39.900 "dma_device_id": "system", 00:16:39.900 "dma_device_type": 1 00:16:39.900 }, 00:16:39.900 { 00:16:39.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.900 "dma_device_type": 2 00:16:39.900 } 00:16:39.900 ], 00:16:39.900 "driver_specific": {} 00:16:39.900 } 00:16:39.900 ] 00:16:39.900 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.900 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:39.900 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:39.900 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:16:39.900 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:39.900 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.900 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.900 BaseBdev3 00:16:39.900 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.900 09:53:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:39.900 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:39.900 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:39.900 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:39.900 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:39.900 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:39.900 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:39.900 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.900 09:53:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.900 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.900 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:39.900 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.900 09:53:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.900 [ 00:16:39.900 { 00:16:39.900 "name": "BaseBdev3", 00:16:39.900 "aliases": [ 00:16:39.900 "90194280-b5f0-4031-8982-e16869f2598d" 00:16:39.900 ], 00:16:39.900 "product_name": "Malloc disk", 00:16:39.900 "block_size": 512, 00:16:39.900 "num_blocks": 65536, 00:16:39.900 "uuid": "90194280-b5f0-4031-8982-e16869f2598d", 00:16:39.900 "assigned_rate_limits": { 00:16:39.900 "rw_ios_per_sec": 0, 00:16:39.900 "rw_mbytes_per_sec": 0, 00:16:39.900 "r_mbytes_per_sec": 0, 00:16:39.900 "w_mbytes_per_sec": 0 00:16:39.900 }, 00:16:39.900 "claimed": false, 00:16:39.900 "zoned": false, 00:16:39.900 "supported_io_types": { 00:16:39.900 "read": true, 00:16:39.900 "write": true, 00:16:39.900 "unmap": true, 00:16:39.900 "flush": true, 00:16:39.900 "reset": true, 00:16:39.900 "nvme_admin": false, 00:16:39.900 "nvme_io": false, 00:16:39.900 "nvme_io_md": false, 00:16:39.900 "write_zeroes": true, 00:16:39.900 "zcopy": true, 00:16:39.900 "get_zone_info": false, 00:16:39.900 "zone_management": false, 00:16:39.900 "zone_append": false, 00:16:39.900 "compare": false, 00:16:39.900 "compare_and_write": false, 00:16:39.900 "abort": true, 00:16:39.900 "seek_hole": false, 00:16:39.900 "seek_data": false, 00:16:39.900 "copy": true, 00:16:39.900 "nvme_iov_md": false 00:16:39.900 }, 00:16:39.900 "memory_domains": [ 00:16:39.900 { 00:16:39.900 "dma_device_id": "system", 00:16:39.900 "dma_device_type": 1 00:16:39.900 }, 00:16:39.900 { 00:16:39.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.900 "dma_device_type": 2 00:16:39.900 } 00:16:39.900 ], 00:16:39.900 "driver_specific": {} 00:16:39.900 } 00:16:40.160 ] 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.160 [2024-11-27 09:53:41.037567] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:40.160 [2024-11-27 09:53:41.037703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:40.160 [2024-11-27 09:53:41.037765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:40.160 [2024-11-27 09:53:41.040146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.160 09:53:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.160 "name": "Existed_Raid", 00:16:40.160 "uuid": "3ab29cbf-fcd5-42cb-9b38-cac7fcd3a40a", 00:16:40.160 "strip_size_kb": 64, 00:16:40.160 "state": "configuring", 00:16:40.160 "raid_level": "raid5f", 00:16:40.160 "superblock": true, 00:16:40.160 "num_base_bdevs": 3, 00:16:40.160 "num_base_bdevs_discovered": 2, 00:16:40.160 "num_base_bdevs_operational": 3, 00:16:40.160 "base_bdevs_list": [ 00:16:40.160 { 00:16:40.160 "name": "BaseBdev1", 00:16:40.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.160 "is_configured": false, 00:16:40.160 "data_offset": 0, 00:16:40.160 "data_size": 0 00:16:40.160 }, 00:16:40.160 { 00:16:40.160 "name": "BaseBdev2", 00:16:40.160 "uuid": "b5b82999-a3d5-44a3-a721-a347e767a280", 00:16:40.160 "is_configured": true, 00:16:40.160 "data_offset": 2048, 00:16:40.160 "data_size": 63488 00:16:40.160 }, 00:16:40.160 { 
00:16:40.160 "name": "BaseBdev3", 00:16:40.160 "uuid": "90194280-b5f0-4031-8982-e16869f2598d", 00:16:40.160 "is_configured": true, 00:16:40.160 "data_offset": 2048, 00:16:40.160 "data_size": 63488 00:16:40.160 } 00:16:40.160 ] 00:16:40.160 }' 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.160 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.420 [2024-11-27 09:53:41.456987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.420 "name": "Existed_Raid", 00:16:40.420 "uuid": "3ab29cbf-fcd5-42cb-9b38-cac7fcd3a40a", 00:16:40.420 "strip_size_kb": 64, 00:16:40.420 "state": "configuring", 00:16:40.420 "raid_level": "raid5f", 00:16:40.420 "superblock": true, 00:16:40.420 "num_base_bdevs": 3, 00:16:40.420 "num_base_bdevs_discovered": 1, 00:16:40.420 "num_base_bdevs_operational": 3, 00:16:40.420 "base_bdevs_list": [ 00:16:40.420 { 00:16:40.420 "name": "BaseBdev1", 00:16:40.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.420 "is_configured": false, 00:16:40.420 "data_offset": 0, 00:16:40.420 "data_size": 0 00:16:40.420 }, 00:16:40.420 { 00:16:40.420 "name": null, 00:16:40.420 "uuid": "b5b82999-a3d5-44a3-a721-a347e767a280", 00:16:40.420 "is_configured": false, 00:16:40.420 "data_offset": 0, 00:16:40.420 "data_size": 63488 00:16:40.420 }, 00:16:40.420 { 00:16:40.420 "name": "BaseBdev3", 00:16:40.420 "uuid": "90194280-b5f0-4031-8982-e16869f2598d", 00:16:40.420 "is_configured": true, 00:16:40.420 "data_offset": 2048, 00:16:40.420 "data_size": 
63488 00:16:40.420 } 00:16:40.420 ] 00:16:40.420 }' 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.420 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.989 [2024-11-27 09:53:41.976376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:40.989 BaseBdev1 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:40.989 09:53:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.989 09:53:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.989 [ 00:16:40.989 { 00:16:40.989 "name": "BaseBdev1", 00:16:40.989 "aliases": [ 00:16:40.989 "da7f049a-e037-47bb-aca6-9c1f6874781b" 00:16:40.989 ], 00:16:40.989 "product_name": "Malloc disk", 00:16:40.989 "block_size": 512, 00:16:40.989 "num_blocks": 65536, 00:16:40.989 "uuid": "da7f049a-e037-47bb-aca6-9c1f6874781b", 00:16:40.989 "assigned_rate_limits": { 00:16:40.989 "rw_ios_per_sec": 0, 00:16:40.989 "rw_mbytes_per_sec": 0, 00:16:40.989 "r_mbytes_per_sec": 0, 00:16:40.989 "w_mbytes_per_sec": 0 00:16:40.989 }, 00:16:40.989 "claimed": true, 00:16:40.989 "claim_type": "exclusive_write", 00:16:40.989 "zoned": false, 00:16:40.989 "supported_io_types": { 00:16:40.989 "read": true, 00:16:40.989 "write": true, 00:16:40.989 "unmap": true, 00:16:40.989 "flush": true, 00:16:40.989 "reset": true, 00:16:40.989 "nvme_admin": false, 00:16:40.989 
"nvme_io": false, 00:16:40.989 "nvme_io_md": false, 00:16:40.989 "write_zeroes": true, 00:16:40.989 "zcopy": true, 00:16:40.989 "get_zone_info": false, 00:16:40.989 "zone_management": false, 00:16:40.989 "zone_append": false, 00:16:40.989 "compare": false, 00:16:40.989 "compare_and_write": false, 00:16:40.989 "abort": true, 00:16:40.989 "seek_hole": false, 00:16:40.989 "seek_data": false, 00:16:40.989 "copy": true, 00:16:40.989 "nvme_iov_md": false 00:16:40.989 }, 00:16:40.989 "memory_domains": [ 00:16:40.989 { 00:16:40.989 "dma_device_id": "system", 00:16:40.989 "dma_device_type": 1 00:16:40.989 }, 00:16:40.989 { 00:16:40.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.989 "dma_device_type": 2 00:16:40.989 } 00:16:40.989 ], 00:16:40.989 "driver_specific": {} 00:16:40.989 } 00:16:40.989 ] 00:16:40.989 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.989 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:40.989 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:40.989 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:40.989 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:40.989 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.989 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.989 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.989 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.989 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:40.989 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.989 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.989 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.989 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.989 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.989 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.990 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.990 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.990 "name": "Existed_Raid", 00:16:40.990 "uuid": "3ab29cbf-fcd5-42cb-9b38-cac7fcd3a40a", 00:16:40.990 "strip_size_kb": 64, 00:16:40.990 "state": "configuring", 00:16:40.990 "raid_level": "raid5f", 00:16:40.990 "superblock": true, 00:16:40.990 "num_base_bdevs": 3, 00:16:40.990 "num_base_bdevs_discovered": 2, 00:16:40.990 "num_base_bdevs_operational": 3, 00:16:40.990 "base_bdevs_list": [ 00:16:40.990 { 00:16:40.990 "name": "BaseBdev1", 00:16:40.990 "uuid": "da7f049a-e037-47bb-aca6-9c1f6874781b", 00:16:40.990 "is_configured": true, 00:16:40.990 "data_offset": 2048, 00:16:40.990 "data_size": 63488 00:16:40.990 }, 00:16:40.990 { 00:16:40.990 "name": null, 00:16:40.990 "uuid": "b5b82999-a3d5-44a3-a721-a347e767a280", 00:16:40.990 "is_configured": false, 00:16:40.990 "data_offset": 0, 00:16:40.990 "data_size": 63488 00:16:40.990 }, 00:16:40.990 { 00:16:40.990 "name": "BaseBdev3", 00:16:40.990 "uuid": "90194280-b5f0-4031-8982-e16869f2598d", 00:16:40.990 "is_configured": true, 00:16:40.990 "data_offset": 2048, 00:16:40.990 "data_size": 
63488 00:16:40.990 } 00:16:40.990 ] 00:16:40.990 }' 00:16:40.990 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.990 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.558 [2024-11-27 09:53:42.479673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:41.558 09:53:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.558 "name": "Existed_Raid", 00:16:41.558 "uuid": "3ab29cbf-fcd5-42cb-9b38-cac7fcd3a40a", 00:16:41.558 "strip_size_kb": 64, 00:16:41.558 "state": "configuring", 00:16:41.558 "raid_level": "raid5f", 00:16:41.558 "superblock": true, 00:16:41.558 "num_base_bdevs": 3, 00:16:41.558 "num_base_bdevs_discovered": 1, 00:16:41.558 "num_base_bdevs_operational": 3, 00:16:41.558 "base_bdevs_list": [ 00:16:41.558 { 00:16:41.558 "name": "BaseBdev1", 00:16:41.558 "uuid": "da7f049a-e037-47bb-aca6-9c1f6874781b", 
00:16:41.558 "is_configured": true, 00:16:41.558 "data_offset": 2048, 00:16:41.558 "data_size": 63488 00:16:41.558 }, 00:16:41.558 { 00:16:41.558 "name": null, 00:16:41.558 "uuid": "b5b82999-a3d5-44a3-a721-a347e767a280", 00:16:41.558 "is_configured": false, 00:16:41.558 "data_offset": 0, 00:16:41.558 "data_size": 63488 00:16:41.558 }, 00:16:41.558 { 00:16:41.558 "name": null, 00:16:41.558 "uuid": "90194280-b5f0-4031-8982-e16869f2598d", 00:16:41.558 "is_configured": false, 00:16:41.558 "data_offset": 0, 00:16:41.558 "data_size": 63488 00:16:41.558 } 00:16:41.558 ] 00:16:41.558 }' 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.558 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.817 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.817 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.817 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.817 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:41.817 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.076 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:42.076 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:42.076 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.076 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.077 [2024-11-27 09:53:42.954966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:16:42.077 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.077 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:42.077 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.077 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.077 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.077 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.077 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.077 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.077 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.077 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.077 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.077 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.077 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.077 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.077 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.077 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.077 09:53:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.077 "name": "Existed_Raid", 00:16:42.077 "uuid": "3ab29cbf-fcd5-42cb-9b38-cac7fcd3a40a", 00:16:42.077 "strip_size_kb": 64, 00:16:42.077 "state": "configuring", 00:16:42.077 "raid_level": "raid5f", 00:16:42.077 "superblock": true, 00:16:42.077 "num_base_bdevs": 3, 00:16:42.077 "num_base_bdevs_discovered": 2, 00:16:42.077 "num_base_bdevs_operational": 3, 00:16:42.077 "base_bdevs_list": [ 00:16:42.077 { 00:16:42.077 "name": "BaseBdev1", 00:16:42.077 "uuid": "da7f049a-e037-47bb-aca6-9c1f6874781b", 00:16:42.077 "is_configured": true, 00:16:42.077 "data_offset": 2048, 00:16:42.077 "data_size": 63488 00:16:42.077 }, 00:16:42.077 { 00:16:42.077 "name": null, 00:16:42.077 "uuid": "b5b82999-a3d5-44a3-a721-a347e767a280", 00:16:42.077 "is_configured": false, 00:16:42.077 "data_offset": 0, 00:16:42.077 "data_size": 63488 00:16:42.077 }, 00:16:42.077 { 00:16:42.077 "name": "BaseBdev3", 00:16:42.077 "uuid": "90194280-b5f0-4031-8982-e16869f2598d", 00:16:42.077 "is_configured": true, 00:16:42.077 "data_offset": 2048, 00:16:42.077 "data_size": 63488 00:16:42.077 } 00:16:42.077 ] 00:16:42.077 }' 00:16:42.077 09:53:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.077 09:53:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.335 09:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.335 09:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:42.336 09:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.336 09:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.336 09:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.336 09:53:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:42.336 09:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:42.336 09:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.336 09:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.336 [2024-11-27 09:53:43.414241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:42.595 09:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.595 09:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:42.595 09:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:42.595 09:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:42.595 09:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.595 09:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.595 09:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:42.595 09:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.595 09:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.595 09:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.595 09:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.595 09:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:42.595 09:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.595 09:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.595 09:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.595 09:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.595 09:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.595 "name": "Existed_Raid", 00:16:42.595 "uuid": "3ab29cbf-fcd5-42cb-9b38-cac7fcd3a40a", 00:16:42.595 "strip_size_kb": 64, 00:16:42.595 "state": "configuring", 00:16:42.595 "raid_level": "raid5f", 00:16:42.595 "superblock": true, 00:16:42.595 "num_base_bdevs": 3, 00:16:42.595 "num_base_bdevs_discovered": 1, 00:16:42.595 "num_base_bdevs_operational": 3, 00:16:42.595 "base_bdevs_list": [ 00:16:42.595 { 00:16:42.595 "name": null, 00:16:42.595 "uuid": "da7f049a-e037-47bb-aca6-9c1f6874781b", 00:16:42.595 "is_configured": false, 00:16:42.595 "data_offset": 0, 00:16:42.595 "data_size": 63488 00:16:42.595 }, 00:16:42.595 { 00:16:42.595 "name": null, 00:16:42.595 "uuid": "b5b82999-a3d5-44a3-a721-a347e767a280", 00:16:42.595 "is_configured": false, 00:16:42.595 "data_offset": 0, 00:16:42.595 "data_size": 63488 00:16:42.595 }, 00:16:42.595 { 00:16:42.595 "name": "BaseBdev3", 00:16:42.595 "uuid": "90194280-b5f0-4031-8982-e16869f2598d", 00:16:42.595 "is_configured": true, 00:16:42.595 "data_offset": 2048, 00:16:42.595 "data_size": 63488 00:16:42.595 } 00:16:42.595 ] 00:16:42.595 }' 00:16:42.595 09:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.595 09:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.854 09:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:16:42.854 09:53:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.854 09:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.854 09:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.854 09:53:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.113 [2024-11-27 09:53:44.014426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:43.113 
09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.113 "name": "Existed_Raid", 00:16:43.113 "uuid": "3ab29cbf-fcd5-42cb-9b38-cac7fcd3a40a", 00:16:43.113 "strip_size_kb": 64, 00:16:43.113 "state": "configuring", 00:16:43.113 "raid_level": "raid5f", 00:16:43.113 "superblock": true, 00:16:43.113 "num_base_bdevs": 3, 00:16:43.113 "num_base_bdevs_discovered": 2, 00:16:43.113 "num_base_bdevs_operational": 3, 00:16:43.113 "base_bdevs_list": [ 00:16:43.113 { 00:16:43.113 "name": null, 00:16:43.113 "uuid": "da7f049a-e037-47bb-aca6-9c1f6874781b", 00:16:43.113 "is_configured": false, 00:16:43.113 "data_offset": 0, 00:16:43.113 "data_size": 63488 00:16:43.113 }, 00:16:43.113 { 00:16:43.113 "name": "BaseBdev2", 00:16:43.113 "uuid": "b5b82999-a3d5-44a3-a721-a347e767a280", 00:16:43.113 "is_configured": true, 00:16:43.113 "data_offset": 2048, 00:16:43.113 "data_size": 63488 00:16:43.113 }, 
00:16:43.113 { 00:16:43.113 "name": "BaseBdev3", 00:16:43.113 "uuid": "90194280-b5f0-4031-8982-e16869f2598d", 00:16:43.113 "is_configured": true, 00:16:43.113 "data_offset": 2048, 00:16:43.113 "data_size": 63488 00:16:43.113 } 00:16:43.113 ] 00:16:43.113 }' 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.113 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.372 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:43.372 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.372 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.372 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.372 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.372 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:43.372 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:43.372 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.372 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.372 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u da7f049a-e037-47bb-aca6-9c1f6874781b 00:16:43.632 09:53:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.632 [2024-11-27 09:53:44.577065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:43.632 NewBaseBdev 00:16:43.632 [2024-11-27 09:53:44.577471] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:43.632 [2024-11-27 09:53:44.577497] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:43.632 [2024-11-27 09:53:44.577809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.632 [2024-11-27 09:53:44.583603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x617000008200 00:16:43.632 [2024-11-27 09:53:44.583676] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:43.632 [2024-11-27 09:53:44.583975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.632 [ 00:16:43.632 { 00:16:43.632 "name": "NewBaseBdev", 00:16:43.632 "aliases": [ 00:16:43.632 "da7f049a-e037-47bb-aca6-9c1f6874781b" 00:16:43.632 ], 00:16:43.632 "product_name": "Malloc disk", 00:16:43.632 "block_size": 512, 00:16:43.632 "num_blocks": 65536, 00:16:43.632 "uuid": "da7f049a-e037-47bb-aca6-9c1f6874781b", 00:16:43.632 "assigned_rate_limits": { 00:16:43.632 "rw_ios_per_sec": 0, 00:16:43.632 "rw_mbytes_per_sec": 0, 00:16:43.632 "r_mbytes_per_sec": 0, 00:16:43.632 "w_mbytes_per_sec": 0 00:16:43.632 }, 00:16:43.632 "claimed": true, 00:16:43.632 "claim_type": "exclusive_write", 00:16:43.632 "zoned": false, 00:16:43.632 "supported_io_types": { 00:16:43.632 "read": true, 00:16:43.632 "write": true, 00:16:43.632 "unmap": true, 00:16:43.632 "flush": true, 00:16:43.632 "reset": true, 00:16:43.632 "nvme_admin": false, 00:16:43.632 "nvme_io": false, 00:16:43.632 "nvme_io_md": false, 00:16:43.632 "write_zeroes": true, 00:16:43.632 "zcopy": true, 00:16:43.632 "get_zone_info": false, 00:16:43.632 "zone_management": false, 00:16:43.632 "zone_append": false, 00:16:43.632 "compare": false, 00:16:43.632 "compare_and_write": false, 00:16:43.632 "abort": true, 00:16:43.632 "seek_hole": false, 
00:16:43.632 "seek_data": false, 00:16:43.632 "copy": true, 00:16:43.632 "nvme_iov_md": false 00:16:43.632 }, 00:16:43.632 "memory_domains": [ 00:16:43.632 { 00:16:43.632 "dma_device_id": "system", 00:16:43.632 "dma_device_type": 1 00:16:43.632 }, 00:16:43.632 { 00:16:43.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.632 "dma_device_type": 2 00:16:43.632 } 00:16:43.632 ], 00:16:43.632 "driver_specific": {} 00:16:43.632 } 00:16:43.632 ] 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:43.632 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:43.633 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.633 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.633 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.633 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.633 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:43.633 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.633 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.633 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.633 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.633 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.633 "name": "Existed_Raid", 00:16:43.633 "uuid": "3ab29cbf-fcd5-42cb-9b38-cac7fcd3a40a", 00:16:43.633 "strip_size_kb": 64, 00:16:43.633 "state": "online", 00:16:43.633 "raid_level": "raid5f", 00:16:43.633 "superblock": true, 00:16:43.633 "num_base_bdevs": 3, 00:16:43.633 "num_base_bdevs_discovered": 3, 00:16:43.633 "num_base_bdevs_operational": 3, 00:16:43.633 "base_bdevs_list": [ 00:16:43.633 { 00:16:43.633 "name": "NewBaseBdev", 00:16:43.633 "uuid": "da7f049a-e037-47bb-aca6-9c1f6874781b", 00:16:43.633 "is_configured": true, 00:16:43.633 "data_offset": 2048, 00:16:43.633 "data_size": 63488 00:16:43.633 }, 00:16:43.633 { 00:16:43.633 "name": "BaseBdev2", 00:16:43.633 "uuid": "b5b82999-a3d5-44a3-a721-a347e767a280", 00:16:43.633 "is_configured": true, 00:16:43.633 "data_offset": 2048, 00:16:43.633 "data_size": 63488 00:16:43.633 }, 00:16:43.633 { 00:16:43.633 "name": "BaseBdev3", 00:16:43.633 "uuid": "90194280-b5f0-4031-8982-e16869f2598d", 00:16:43.633 "is_configured": true, 00:16:43.633 "data_offset": 2048, 00:16:43.633 "data_size": 63488 00:16:43.633 } 00:16:43.633 ] 00:16:43.633 }' 00:16:43.633 09:53:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.633 09:53:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.201 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:16:44.201 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:44.201 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:44.201 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:44.201 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:44.201 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:44.201 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:44.201 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:44.201 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.201 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.201 [2024-11-27 09:53:45.075015] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.201 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.201 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:44.201 "name": "Existed_Raid", 00:16:44.201 "aliases": [ 00:16:44.201 "3ab29cbf-fcd5-42cb-9b38-cac7fcd3a40a" 00:16:44.201 ], 00:16:44.201 "product_name": "Raid Volume", 00:16:44.201 "block_size": 512, 00:16:44.201 "num_blocks": 126976, 00:16:44.201 "uuid": "3ab29cbf-fcd5-42cb-9b38-cac7fcd3a40a", 00:16:44.201 "assigned_rate_limits": { 00:16:44.201 "rw_ios_per_sec": 0, 00:16:44.201 "rw_mbytes_per_sec": 0, 00:16:44.201 "r_mbytes_per_sec": 0, 00:16:44.201 "w_mbytes_per_sec": 0 00:16:44.201 }, 00:16:44.201 "claimed": false, 00:16:44.201 "zoned": false, 00:16:44.201 
"supported_io_types": { 00:16:44.201 "read": true, 00:16:44.201 "write": true, 00:16:44.201 "unmap": false, 00:16:44.201 "flush": false, 00:16:44.201 "reset": true, 00:16:44.201 "nvme_admin": false, 00:16:44.201 "nvme_io": false, 00:16:44.201 "nvme_io_md": false, 00:16:44.201 "write_zeroes": true, 00:16:44.201 "zcopy": false, 00:16:44.201 "get_zone_info": false, 00:16:44.201 "zone_management": false, 00:16:44.201 "zone_append": false, 00:16:44.201 "compare": false, 00:16:44.201 "compare_and_write": false, 00:16:44.201 "abort": false, 00:16:44.201 "seek_hole": false, 00:16:44.201 "seek_data": false, 00:16:44.201 "copy": false, 00:16:44.201 "nvme_iov_md": false 00:16:44.201 }, 00:16:44.201 "driver_specific": { 00:16:44.201 "raid": { 00:16:44.201 "uuid": "3ab29cbf-fcd5-42cb-9b38-cac7fcd3a40a", 00:16:44.201 "strip_size_kb": 64, 00:16:44.201 "state": "online", 00:16:44.201 "raid_level": "raid5f", 00:16:44.201 "superblock": true, 00:16:44.201 "num_base_bdevs": 3, 00:16:44.201 "num_base_bdevs_discovered": 3, 00:16:44.201 "num_base_bdevs_operational": 3, 00:16:44.201 "base_bdevs_list": [ 00:16:44.201 { 00:16:44.201 "name": "NewBaseBdev", 00:16:44.201 "uuid": "da7f049a-e037-47bb-aca6-9c1f6874781b", 00:16:44.201 "is_configured": true, 00:16:44.201 "data_offset": 2048, 00:16:44.201 "data_size": 63488 00:16:44.201 }, 00:16:44.201 { 00:16:44.201 "name": "BaseBdev2", 00:16:44.201 "uuid": "b5b82999-a3d5-44a3-a721-a347e767a280", 00:16:44.201 "is_configured": true, 00:16:44.201 "data_offset": 2048, 00:16:44.201 "data_size": 63488 00:16:44.201 }, 00:16:44.201 { 00:16:44.201 "name": "BaseBdev3", 00:16:44.201 "uuid": "90194280-b5f0-4031-8982-e16869f2598d", 00:16:44.201 "is_configured": true, 00:16:44.201 "data_offset": 2048, 00:16:44.201 "data_size": 63488 00:16:44.201 } 00:16:44.201 ] 00:16:44.201 } 00:16:44.201 } 00:16:44.201 }' 00:16:44.201 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:16:44.201 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:44.201 BaseBdev2 00:16:44.201 BaseBdev3' 00:16:44.201 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.201 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:44.201 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.201 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:44.201 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.201 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.202 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.202 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.202 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.202 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.202 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.202 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:44.202 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.202 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.202 09:53:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.202 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.202 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.202 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.202 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:44.202 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:44.202 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:44.202 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.202 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.202 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.460 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:44.460 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:44.460 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:44.460 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.460 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.460 [2024-11-27 09:53:45.354300] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:44.460 [2024-11-27 09:53:45.354430] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:16:44.460 [2024-11-27 09:53:45.354595] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:44.460 [2024-11-27 09:53:45.354970] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.460 [2024-11-27 09:53:45.355063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:44.460 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.460 09:53:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80818 00:16:44.460 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80818 ']' 00:16:44.460 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80818 00:16:44.460 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:44.460 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.460 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80818 00:16:44.460 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:44.460 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:44.460 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80818' 00:16:44.460 killing process with pid 80818 00:16:44.460 09:53:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80818 00:16:44.460 [2024-11-27 09:53:45.408122] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:44.460 09:53:45 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 80818 00:16:44.719 [2024-11-27 09:53:45.741963] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:46.099 09:53:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:46.099 00:16:46.099 real 0m10.820s 00:16:46.099 user 0m16.741s 00:16:46.099 sys 0m2.201s 00:16:46.099 09:53:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.099 ************************************ 00:16:46.099 END TEST raid5f_state_function_test_sb 00:16:46.099 ************************************ 00:16:46.099 09:53:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.099 09:53:47 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:16:46.099 09:53:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:46.099 09:53:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.099 09:53:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:46.099 ************************************ 00:16:46.099 START TEST raid5f_superblock_test 00:16:46.099 ************************************ 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81442 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81442 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81442 ']' 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:46.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.099 09:53:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.099 [2024-11-27 09:53:47.165436] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:16:46.099 [2024-11-27 09:53:47.165720] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81442 ] 00:16:46.358 [2024-11-27 09:53:47.346071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.618 [2024-11-27 09:53:47.489580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.618 [2024-11-27 09:53:47.738776] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.618 [2024-11-27 09:53:47.738853] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:47.191 09:53:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.191 malloc1 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.191 [2024-11-27 09:53:48.075016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:47.191 [2024-11-27 09:53:48.075189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.191 [2024-11-27 09:53:48.075241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:47.191 [2024-11-27 09:53:48.075276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.191 [2024-11-27 09:53:48.078035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.191 [2024-11-27 09:53:48.078134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:47.191 pt1 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.191 malloc2 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.191 [2024-11-27 09:53:48.141522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:47.191 [2024-11-27 09:53:48.141676] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.191 [2024-11-27 09:53:48.141750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:47.191 [2024-11-27 09:53:48.141794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.191 [2024-11-27 09:53:48.144466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.191 [2024-11-27 09:53:48.144566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:47.191 pt2 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:47.191 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.192 malloc3 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.192 [2024-11-27 09:53:48.218192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:47.192 [2024-11-27 09:53:48.218341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.192 [2024-11-27 09:53:48.218393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:47.192 [2024-11-27 09:53:48.218436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.192 [2024-11-27 09:53:48.221133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.192 [2024-11-27 09:53:48.221224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:47.192 pt3 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.192 [2024-11-27 09:53:48.230271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:47.192 [2024-11-27 
09:53:48.232589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:47.192 [2024-11-27 09:53:48.232731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:47.192 [2024-11-27 09:53:48.232987] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:47.192 [2024-11-27 09:53:48.233072] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:47.192 [2024-11-27 09:53:48.233419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:47.192 [2024-11-27 09:53:48.239339] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:47.192 [2024-11-27 09:53:48.239404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:47.192 [2024-11-27 09:53:48.239779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.192 "name": "raid_bdev1", 00:16:47.192 "uuid": "8fe160fd-6bf4-441d-a8e6-b25d910091fc", 00:16:47.192 "strip_size_kb": 64, 00:16:47.192 "state": "online", 00:16:47.192 "raid_level": "raid5f", 00:16:47.192 "superblock": true, 00:16:47.192 "num_base_bdevs": 3, 00:16:47.192 "num_base_bdevs_discovered": 3, 00:16:47.192 "num_base_bdevs_operational": 3, 00:16:47.192 "base_bdevs_list": [ 00:16:47.192 { 00:16:47.192 "name": "pt1", 00:16:47.192 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:47.192 "is_configured": true, 00:16:47.192 "data_offset": 2048, 00:16:47.192 "data_size": 63488 00:16:47.192 }, 00:16:47.192 { 00:16:47.192 "name": "pt2", 00:16:47.192 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:47.192 "is_configured": true, 00:16:47.192 "data_offset": 2048, 00:16:47.192 "data_size": 63488 00:16:47.192 }, 00:16:47.192 { 00:16:47.192 "name": "pt3", 00:16:47.192 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:47.192 "is_configured": true, 00:16:47.192 "data_offset": 2048, 00:16:47.192 "data_size": 63488 00:16:47.192 } 00:16:47.192 ] 00:16:47.192 }' 00:16:47.192 09:53:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.192 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.788 [2024-11-27 09:53:48.694815] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:47.788 "name": "raid_bdev1", 00:16:47.788 "aliases": [ 00:16:47.788 "8fe160fd-6bf4-441d-a8e6-b25d910091fc" 00:16:47.788 ], 00:16:47.788 "product_name": "Raid Volume", 00:16:47.788 "block_size": 512, 00:16:47.788 "num_blocks": 126976, 00:16:47.788 "uuid": "8fe160fd-6bf4-441d-a8e6-b25d910091fc", 00:16:47.788 "assigned_rate_limits": { 00:16:47.788 "rw_ios_per_sec": 0, 00:16:47.788 
"rw_mbytes_per_sec": 0, 00:16:47.788 "r_mbytes_per_sec": 0, 00:16:47.788 "w_mbytes_per_sec": 0 00:16:47.788 }, 00:16:47.788 "claimed": false, 00:16:47.788 "zoned": false, 00:16:47.788 "supported_io_types": { 00:16:47.788 "read": true, 00:16:47.788 "write": true, 00:16:47.788 "unmap": false, 00:16:47.788 "flush": false, 00:16:47.788 "reset": true, 00:16:47.788 "nvme_admin": false, 00:16:47.788 "nvme_io": false, 00:16:47.788 "nvme_io_md": false, 00:16:47.788 "write_zeroes": true, 00:16:47.788 "zcopy": false, 00:16:47.788 "get_zone_info": false, 00:16:47.788 "zone_management": false, 00:16:47.788 "zone_append": false, 00:16:47.788 "compare": false, 00:16:47.788 "compare_and_write": false, 00:16:47.788 "abort": false, 00:16:47.788 "seek_hole": false, 00:16:47.788 "seek_data": false, 00:16:47.788 "copy": false, 00:16:47.788 "nvme_iov_md": false 00:16:47.788 }, 00:16:47.788 "driver_specific": { 00:16:47.788 "raid": { 00:16:47.788 "uuid": "8fe160fd-6bf4-441d-a8e6-b25d910091fc", 00:16:47.788 "strip_size_kb": 64, 00:16:47.788 "state": "online", 00:16:47.788 "raid_level": "raid5f", 00:16:47.788 "superblock": true, 00:16:47.788 "num_base_bdevs": 3, 00:16:47.788 "num_base_bdevs_discovered": 3, 00:16:47.788 "num_base_bdevs_operational": 3, 00:16:47.788 "base_bdevs_list": [ 00:16:47.788 { 00:16:47.788 "name": "pt1", 00:16:47.788 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:47.788 "is_configured": true, 00:16:47.788 "data_offset": 2048, 00:16:47.788 "data_size": 63488 00:16:47.788 }, 00:16:47.788 { 00:16:47.788 "name": "pt2", 00:16:47.788 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:47.788 "is_configured": true, 00:16:47.788 "data_offset": 2048, 00:16:47.788 "data_size": 63488 00:16:47.788 }, 00:16:47.788 { 00:16:47.788 "name": "pt3", 00:16:47.788 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:47.788 "is_configured": true, 00:16:47.788 "data_offset": 2048, 00:16:47.788 "data_size": 63488 00:16:47.788 } 00:16:47.788 ] 00:16:47.788 } 00:16:47.788 } 
00:16:47.788 }' 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:47.788 pt2 00:16:47.788 pt3' 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.788 09:53:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:47.788 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.048 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:48.048 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:48.048 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:48.048 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.048 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.048 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:48.048 [2024-11-27 09:53:48.954423] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.048 09:53:48 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.048 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=8fe160fd-6bf4-441d-a8e6-b25d910091fc 00:16:48.048 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 8fe160fd-6bf4-441d-a8e6-b25d910091fc ']' 00:16:48.048 09:53:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:48.048 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.048 09:53:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.048 [2024-11-27 09:53:49.002111] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.048 [2024-11-27 09:53:49.002246] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.048 [2024-11-27 09:53:49.002422] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.048 [2024-11-27 09:53:49.002544] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:48.048 [2024-11-27 09:53:49.002604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:48.048 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.048 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:48.048 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.048 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.048 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.048 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.048 09:53:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:48.048 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:48.048 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:48.048 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:48.048 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.048 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.048 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.048 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:48.048 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:48.048 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.048 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.048 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.048 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:48.048 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:48.048 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:48.049 09:53:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.049 [2024-11-27 09:53:49.149929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:48.049 [2024-11-27 
09:53:49.152365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:48.049 [2024-11-27 09:53:49.152438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:48.049 [2024-11-27 09:53:49.152520] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:48.049 [2024-11-27 09:53:49.152590] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:48.049 [2024-11-27 09:53:49.152613] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:48.049 [2024-11-27 09:53:49.152634] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.049 [2024-11-27 09:53:49.152647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:48.049 request: 00:16:48.049 { 00:16:48.049 "name": "raid_bdev1", 00:16:48.049 "raid_level": "raid5f", 00:16:48.049 "base_bdevs": [ 00:16:48.049 "malloc1", 00:16:48.049 "malloc2", 00:16:48.049 "malloc3" 00:16:48.049 ], 00:16:48.049 "strip_size_kb": 64, 00:16:48.049 "superblock": false, 00:16:48.049 "method": "bdev_raid_create", 00:16:48.049 "req_id": 1 00:16:48.049 } 00:16:48.049 Got JSON-RPC error response 00:16:48.049 response: 00:16:48.049 { 00:16:48.049 "code": -17, 00:16:48.049 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:48.049 } 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:48.049 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.308 [2024-11-27 09:53:49.213751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:48.308 [2024-11-27 09:53:49.213932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.308 [2024-11-27 09:53:49.213981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:48.308 [2024-11-27 09:53:49.214059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.308 [2024-11-27 09:53:49.216886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.308 [2024-11-27 09:53:49.217010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:48.308 [2024-11-27 09:53:49.217196] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:48.308 [2024-11-27 09:53:49.217305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:48.308 pt1 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.308 "name": "raid_bdev1", 00:16:48.308 "uuid": "8fe160fd-6bf4-441d-a8e6-b25d910091fc", 00:16:48.308 "strip_size_kb": 64, 00:16:48.308 "state": "configuring", 00:16:48.308 "raid_level": "raid5f", 00:16:48.308 "superblock": true, 00:16:48.308 "num_base_bdevs": 3, 00:16:48.308 "num_base_bdevs_discovered": 1, 00:16:48.308 "num_base_bdevs_operational": 3, 00:16:48.308 "base_bdevs_list": [ 00:16:48.308 { 00:16:48.308 "name": "pt1", 00:16:48.308 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:48.308 "is_configured": true, 00:16:48.308 "data_offset": 2048, 00:16:48.308 "data_size": 63488 00:16:48.308 }, 00:16:48.308 { 00:16:48.308 "name": null, 00:16:48.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.308 "is_configured": false, 00:16:48.308 "data_offset": 2048, 00:16:48.308 "data_size": 63488 00:16:48.308 }, 00:16:48.308 { 00:16:48.308 "name": null, 00:16:48.308 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:48.308 "is_configured": false, 00:16:48.308 "data_offset": 2048, 00:16:48.308 "data_size": 63488 00:16:48.308 } 00:16:48.308 ] 00:16:48.308 }' 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.308 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.567 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:48.567 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:48.567 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.567 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.567 [2024-11-27 09:53:49.688925] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:48.567 [2024-11-27 09:53:49.689122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.567 [2024-11-27 09:53:49.689178] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:48.567 [2024-11-27 09:53:49.689222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.567 [2024-11-27 09:53:49.689832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.567 [2024-11-27 09:53:49.689918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:48.567 [2024-11-27 09:53:49.690117] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:48.567 [2024-11-27 09:53:49.690200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:48.567 pt2 00:16:48.567 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.567 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:48.567 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.567 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.826 [2024-11-27 09:53:49.700907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:48.826 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.826 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:48.826 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.826 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.826 09:53:49 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.826 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.826 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:48.826 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.826 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.826 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.826 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.826 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.826 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.826 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.826 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.826 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.826 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.826 "name": "raid_bdev1", 00:16:48.826 "uuid": "8fe160fd-6bf4-441d-a8e6-b25d910091fc", 00:16:48.826 "strip_size_kb": 64, 00:16:48.826 "state": "configuring", 00:16:48.826 "raid_level": "raid5f", 00:16:48.826 "superblock": true, 00:16:48.826 "num_base_bdevs": 3, 00:16:48.826 "num_base_bdevs_discovered": 1, 00:16:48.826 "num_base_bdevs_operational": 3, 00:16:48.826 "base_bdevs_list": [ 00:16:48.826 { 00:16:48.827 "name": "pt1", 00:16:48.827 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:48.827 "is_configured": true, 00:16:48.827 "data_offset": 2048, 00:16:48.827 "data_size": 63488 00:16:48.827 }, 00:16:48.827 { 
00:16:48.827 "name": null, 00:16:48.827 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.827 "is_configured": false, 00:16:48.827 "data_offset": 0, 00:16:48.827 "data_size": 63488 00:16:48.827 }, 00:16:48.827 { 00:16:48.827 "name": null, 00:16:48.827 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:48.827 "is_configured": false, 00:16:48.827 "data_offset": 2048, 00:16:48.827 "data_size": 63488 00:16:48.827 } 00:16:48.827 ] 00:16:48.827 }' 00:16:48.827 09:53:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.827 09:53:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.086 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:49.086 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:49.086 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:49.086 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.086 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.086 [2024-11-27 09:53:50.140133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:49.086 [2024-11-27 09:53:50.140316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.086 [2024-11-27 09:53:50.140361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:49.086 [2024-11-27 09:53:50.140419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.086 [2024-11-27 09:53:50.141104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.086 [2024-11-27 09:53:50.141191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:49.086 [2024-11-27 
09:53:50.141364] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:49.086 [2024-11-27 09:53:50.141440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:49.086 pt2 00:16:49.086 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.086 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:49.086 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:49.086 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:49.086 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.086 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.086 [2024-11-27 09:53:50.152122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:49.086 [2024-11-27 09:53:50.152242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.086 [2024-11-27 09:53:50.152283] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:49.086 [2024-11-27 09:53:50.152319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.086 [2024-11-27 09:53:50.152907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.086 [2024-11-27 09:53:50.153011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:49.086 [2024-11-27 09:53:50.153172] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:49.086 [2024-11-27 09:53:50.153245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:49.086 [2024-11-27 09:53:50.153470] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:16:49.086 [2024-11-27 09:53:50.153527] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:49.086 [2024-11-27 09:53:50.153850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:49.086 [2024-11-27 09:53:50.159185] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:49.086 [2024-11-27 09:53:50.159252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:49.086 [2024-11-27 09:53:50.159558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.086 pt3 00:16:49.087 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.087 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:49.087 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:49.087 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:49.087 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.087 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.087 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.087 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.087 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:49.087 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.087 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.087 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:16:49.087 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.087 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.087 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.087 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.087 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.087 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.087 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.087 "name": "raid_bdev1", 00:16:49.087 "uuid": "8fe160fd-6bf4-441d-a8e6-b25d910091fc", 00:16:49.087 "strip_size_kb": 64, 00:16:49.087 "state": "online", 00:16:49.087 "raid_level": "raid5f", 00:16:49.087 "superblock": true, 00:16:49.087 "num_base_bdevs": 3, 00:16:49.087 "num_base_bdevs_discovered": 3, 00:16:49.087 "num_base_bdevs_operational": 3, 00:16:49.087 "base_bdevs_list": [ 00:16:49.087 { 00:16:49.087 "name": "pt1", 00:16:49.087 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:49.087 "is_configured": true, 00:16:49.087 "data_offset": 2048, 00:16:49.087 "data_size": 63488 00:16:49.087 }, 00:16:49.087 { 00:16:49.087 "name": "pt2", 00:16:49.087 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.087 "is_configured": true, 00:16:49.087 "data_offset": 2048, 00:16:49.087 "data_size": 63488 00:16:49.087 }, 00:16:49.087 { 00:16:49.087 "name": "pt3", 00:16:49.087 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:49.087 "is_configured": true, 00:16:49.087 "data_offset": 2048, 00:16:49.087 "data_size": 63488 00:16:49.087 } 00:16:49.087 ] 00:16:49.087 }' 00:16:49.347 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.347 09:53:50 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.607 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:49.607 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:49.607 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:49.607 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:49.607 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:49.607 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:49.607 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:49.607 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.607 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.607 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:49.607 [2024-11-27 09:53:50.618746] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.607 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.607 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:49.607 "name": "raid_bdev1", 00:16:49.607 "aliases": [ 00:16:49.607 "8fe160fd-6bf4-441d-a8e6-b25d910091fc" 00:16:49.607 ], 00:16:49.607 "product_name": "Raid Volume", 00:16:49.607 "block_size": 512, 00:16:49.607 "num_blocks": 126976, 00:16:49.607 "uuid": "8fe160fd-6bf4-441d-a8e6-b25d910091fc", 00:16:49.607 "assigned_rate_limits": { 00:16:49.607 "rw_ios_per_sec": 0, 00:16:49.607 "rw_mbytes_per_sec": 0, 00:16:49.607 "r_mbytes_per_sec": 0, 00:16:49.607 "w_mbytes_per_sec": 0 00:16:49.607 }, 
00:16:49.607 "claimed": false, 00:16:49.607 "zoned": false, 00:16:49.607 "supported_io_types": { 00:16:49.607 "read": true, 00:16:49.607 "write": true, 00:16:49.607 "unmap": false, 00:16:49.607 "flush": false, 00:16:49.607 "reset": true, 00:16:49.607 "nvme_admin": false, 00:16:49.607 "nvme_io": false, 00:16:49.607 "nvme_io_md": false, 00:16:49.607 "write_zeroes": true, 00:16:49.607 "zcopy": false, 00:16:49.607 "get_zone_info": false, 00:16:49.607 "zone_management": false, 00:16:49.607 "zone_append": false, 00:16:49.607 "compare": false, 00:16:49.607 "compare_and_write": false, 00:16:49.607 "abort": false, 00:16:49.607 "seek_hole": false, 00:16:49.607 "seek_data": false, 00:16:49.607 "copy": false, 00:16:49.607 "nvme_iov_md": false 00:16:49.607 }, 00:16:49.607 "driver_specific": { 00:16:49.607 "raid": { 00:16:49.607 "uuid": "8fe160fd-6bf4-441d-a8e6-b25d910091fc", 00:16:49.607 "strip_size_kb": 64, 00:16:49.607 "state": "online", 00:16:49.607 "raid_level": "raid5f", 00:16:49.607 "superblock": true, 00:16:49.607 "num_base_bdevs": 3, 00:16:49.607 "num_base_bdevs_discovered": 3, 00:16:49.607 "num_base_bdevs_operational": 3, 00:16:49.607 "base_bdevs_list": [ 00:16:49.607 { 00:16:49.607 "name": "pt1", 00:16:49.607 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:49.607 "is_configured": true, 00:16:49.607 "data_offset": 2048, 00:16:49.607 "data_size": 63488 00:16:49.607 }, 00:16:49.607 { 00:16:49.607 "name": "pt2", 00:16:49.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.607 "is_configured": true, 00:16:49.607 "data_offset": 2048, 00:16:49.607 "data_size": 63488 00:16:49.607 }, 00:16:49.607 { 00:16:49.607 "name": "pt3", 00:16:49.607 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:49.607 "is_configured": true, 00:16:49.607 "data_offset": 2048, 00:16:49.607 "data_size": 63488 00:16:49.607 } 00:16:49.607 ] 00:16:49.607 } 00:16:49.607 } 00:16:49.607 }' 00:16:49.607 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:49.607 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:49.607 pt2 00:16:49.607 pt3' 00:16:49.607 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.867 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:49.867 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.867 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:49.867 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set 
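The `jq` filter above pulls the names of configured base bdevs out of the `bdev_get_bdevs` dump. An equivalent sketch over a trimmed copy of the JSON shown in the log (only the fields the filter touches are reproduced):

```python
# Trimmed raid bdev dump mirroring the log's driver_specific section.
raid_bdev_info = {
    "driver_specific": {
        "raid": {
            "base_bdevs_list": [
                {"name": "pt1", "is_configured": True},
                {"name": "pt2", "is_configured": True},
                {"name": "pt3", "is_configured": True},
            ]
        }
    }
}

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[]
#          | select(.is_configured == true).name'
names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(names)  # ['pt1', 'pt2', 'pt3']
```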
+x 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.868 [2024-11-27 09:53:50.910263] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
8fe160fd-6bf4-441d-a8e6-b25d910091fc '!=' 8fe160fd-6bf4-441d-a8e6-b25d910091fc ']' 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.868 [2024-11-27 09:53:50.958060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.868 09:53:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.128 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.128 "name": "raid_bdev1", 00:16:50.128 "uuid": "8fe160fd-6bf4-441d-a8e6-b25d910091fc", 00:16:50.128 "strip_size_kb": 64, 00:16:50.128 "state": "online", 00:16:50.128 "raid_level": "raid5f", 00:16:50.128 "superblock": true, 00:16:50.128 "num_base_bdevs": 3, 00:16:50.128 "num_base_bdevs_discovered": 2, 00:16:50.128 "num_base_bdevs_operational": 2, 00:16:50.128 "base_bdevs_list": [ 00:16:50.128 { 00:16:50.128 "name": null, 00:16:50.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.128 "is_configured": false, 00:16:50.128 "data_offset": 0, 00:16:50.128 "data_size": 63488 00:16:50.128 }, 00:16:50.128 { 00:16:50.128 "name": "pt2", 00:16:50.128 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.128 "is_configured": true, 00:16:50.128 "data_offset": 2048, 00:16:50.128 "data_size": 63488 00:16:50.128 }, 00:16:50.128 { 00:16:50.128 "name": "pt3", 00:16:50.128 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.128 "is_configured": true, 00:16:50.128 "data_offset": 2048, 00:16:50.128 "data_size": 63488 00:16:50.128 } 00:16:50.128 ] 00:16:50.128 }' 00:16:50.128 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.128 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.387 
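After `bdev_passthru_delete pt1`, the dump above shows the array still `online` with only 2 of 3 base bdevs discovered, and the shell helper `verify_raid_bdev_state` checks that against the expected values. A minimal Python equivalent, under the assumption that the helper's checks reduce to field-by-field equality on the `jq`-selected object (field values below are copied from the log):

```python
# The compared fields of the raid_bdev1 dump after pt1 was removed.
info = {
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid5f",
    "strip_size_kb": 64,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2,
}

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    # Mirrors the equality checks the shell helper applies to the jq output:
    # raid5f tolerates the loss of one base bdev, so the array stays online.
    return (
        info["state"] == expected_state
        and info["raid_level"] == raid_level
        and info["strip_size_kb"] == strip_size
        and info["num_base_bdevs_operational"] == operational
    )

print(verify_raid_bdev_state(info, "online", "raid5f", 64, 2))  # True
```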
09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.387 [2024-11-27 09:53:51.397211] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:50.387 [2024-11-27 09:53:51.397350] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:50.387 [2024-11-27 09:53:51.397495] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.387 [2024-11-27 09:53:51.397611] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:50.387 [2024-11-27 09:53:51.397695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.387 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.388 [2024-11-27 09:53:51.485030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:16:50.388 [2024-11-27 09:53:51.485182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.388 [2024-11-27 09:53:51.485225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:50.388 [2024-11-27 09:53:51.485287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.388 [2024-11-27 09:53:51.488122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.388 [2024-11-27 09:53:51.488218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:50.388 [2024-11-27 09:53:51.488402] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:50.388 [2024-11-27 09:53:51.488516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:50.388 pt2 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.388 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.647 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.647 "name": "raid_bdev1", 00:16:50.647 "uuid": "8fe160fd-6bf4-441d-a8e6-b25d910091fc", 00:16:50.647 "strip_size_kb": 64, 00:16:50.647 "state": "configuring", 00:16:50.647 "raid_level": "raid5f", 00:16:50.647 "superblock": true, 00:16:50.647 "num_base_bdevs": 3, 00:16:50.647 "num_base_bdevs_discovered": 1, 00:16:50.647 "num_base_bdevs_operational": 2, 00:16:50.647 "base_bdevs_list": [ 00:16:50.647 { 00:16:50.647 "name": null, 00:16:50.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.647 "is_configured": false, 00:16:50.647 "data_offset": 2048, 00:16:50.647 "data_size": 63488 00:16:50.647 }, 00:16:50.647 { 00:16:50.647 "name": "pt2", 00:16:50.647 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.647 "is_configured": true, 00:16:50.647 "data_offset": 2048, 00:16:50.647 "data_size": 63488 00:16:50.647 }, 00:16:50.647 { 00:16:50.647 "name": null, 00:16:50.647 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.647 "is_configured": false, 00:16:50.647 "data_offset": 2048, 00:16:50.647 "data_size": 63488 00:16:50.647 } 00:16:50.647 ] 00:16:50.647 }' 00:16:50.647 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.647 09:53:51 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.906 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:50.906 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:50.906 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:16:50.906 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:50.906 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.907 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.907 [2024-11-27 09:53:51.968190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:50.907 [2024-11-27 09:53:51.968391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.907 [2024-11-27 09:53:51.968444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:50.907 [2024-11-27 09:53:51.968498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.907 [2024-11-27 09:53:51.969187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.907 [2024-11-27 09:53:51.969280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:50.907 [2024-11-27 09:53:51.969443] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:50.907 [2024-11-27 09:53:51.969519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:50.907 [2024-11-27 09:53:51.969702] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:50.907 [2024-11-27 09:53:51.969751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:50.907 [2024-11-27 
09:53:51.970103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:50.907 [2024-11-27 09:53:51.975702] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:50.907 [2024-11-27 09:53:51.975775] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:50.907 [2024-11-27 09:53:51.976251] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.907 pt3 00:16:50.907 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.907 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:50.907 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.907 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.907 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.907 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.907 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:50.907 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.907 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.907 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.907 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.907 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.907 09:53:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:50.907 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.907 09:53:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.907 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.907 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.907 "name": "raid_bdev1", 00:16:50.907 "uuid": "8fe160fd-6bf4-441d-a8e6-b25d910091fc", 00:16:50.907 "strip_size_kb": 64, 00:16:50.907 "state": "online", 00:16:50.907 "raid_level": "raid5f", 00:16:50.907 "superblock": true, 00:16:50.907 "num_base_bdevs": 3, 00:16:50.907 "num_base_bdevs_discovered": 2, 00:16:50.907 "num_base_bdevs_operational": 2, 00:16:50.907 "base_bdevs_list": [ 00:16:50.907 { 00:16:50.907 "name": null, 00:16:50.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.907 "is_configured": false, 00:16:50.907 "data_offset": 2048, 00:16:50.907 "data_size": 63488 00:16:50.907 }, 00:16:50.907 { 00:16:50.907 "name": "pt2", 00:16:50.907 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.907 "is_configured": true, 00:16:50.907 "data_offset": 2048, 00:16:50.907 "data_size": 63488 00:16:50.907 }, 00:16:50.907 { 00:16:50.907 "name": "pt3", 00:16:50.907 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:50.907 "is_configured": true, 00:16:50.907 "data_offset": 2048, 00:16:50.907 "data_size": 63488 00:16:50.907 } 00:16:50.907 ] 00:16:50.907 }' 00:16:50.907 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.907 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.475 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:51.475 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.475 09:53:52 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:51.475 [2024-11-27 09:53:52.407604] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.475 [2024-11-27 09:53:52.407737] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.475 [2024-11-27 09:53:52.407910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.475 [2024-11-27 09:53:52.408052] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.475 [2024-11-27 09:53:52.408122] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:51.475 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.475 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:51.475 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.475 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.475 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.475 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.475 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:51.475 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:51.475 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:16:51.475 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:16:51.475 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:16:51.475 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.475 09:53:52 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.475 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.475 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:51.475 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.475 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.475 [2024-11-27 09:53:52.463565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:51.475 [2024-11-27 09:53:52.463665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.475 [2024-11-27 09:53:52.463693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:51.475 [2024-11-27 09:53:52.463706] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.475 [2024-11-27 09:53:52.466672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.475 [2024-11-27 09:53:52.466721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:51.475 [2024-11-27 09:53:52.466854] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:51.475 [2024-11-27 09:53:52.466918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:51.475 [2024-11-27 09:53:52.467136] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:51.475 [2024-11-27 09:53:52.467151] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.475 [2024-11-27 09:53:52.467173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:51.475 pt1 
00:16:51.476 [2024-11-27 09:53:52.467250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:51.476 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.476 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:16:51.476 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:51.476 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.476 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.476 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.476 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.476 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:51.476 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.476 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.476 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.476 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.476 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.476 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.476 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.476 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.476 09:53:52 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.476 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.476 "name": "raid_bdev1", 00:16:51.476 "uuid": "8fe160fd-6bf4-441d-a8e6-b25d910091fc", 00:16:51.476 "strip_size_kb": 64, 00:16:51.476 "state": "configuring", 00:16:51.476 "raid_level": "raid5f", 00:16:51.476 "superblock": true, 00:16:51.476 "num_base_bdevs": 3, 00:16:51.476 "num_base_bdevs_discovered": 1, 00:16:51.476 "num_base_bdevs_operational": 2, 00:16:51.476 "base_bdevs_list": [ 00:16:51.476 { 00:16:51.476 "name": null, 00:16:51.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.476 "is_configured": false, 00:16:51.476 "data_offset": 2048, 00:16:51.476 "data_size": 63488 00:16:51.476 }, 00:16:51.476 { 00:16:51.476 "name": "pt2", 00:16:51.476 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.476 "is_configured": true, 00:16:51.476 "data_offset": 2048, 00:16:51.476 "data_size": 63488 00:16:51.476 }, 00:16:51.476 { 00:16:51.476 "name": null, 00:16:51.476 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:51.476 "is_configured": false, 00:16:51.476 "data_offset": 2048, 00:16:51.476 "data_size": 63488 00:16:51.476 } 00:16:51.476 ] 00:16:51.476 }' 00:16:51.476 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.476 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.046 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.047 [2024-11-27 09:53:52.978726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:52.047 [2024-11-27 09:53:52.978926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.047 [2024-11-27 09:53:52.978967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:52.047 [2024-11-27 09:53:52.978982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.047 [2024-11-27 09:53:52.979700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.047 [2024-11-27 09:53:52.979729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:52.047 [2024-11-27 09:53:52.979861] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:52.047 [2024-11-27 09:53:52.979896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:52.047 [2024-11-27 09:53:52.980107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:52.047 [2024-11-27 09:53:52.980121] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:52.047 [2024-11-27 09:53:52.980456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:52.047 pt3 00:16:52.047 [2024-11-27 09:53:52.986287] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 
00:16:52.047 [2024-11-27 09:53:52.986326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:52.047 [2024-11-27 09:53:52.986679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.047 09:53:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.047 09:53:53 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.047 09:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.047 "name": "raid_bdev1", 00:16:52.047 "uuid": "8fe160fd-6bf4-441d-a8e6-b25d910091fc", 00:16:52.047 "strip_size_kb": 64, 00:16:52.047 "state": "online", 00:16:52.047 "raid_level": "raid5f", 00:16:52.047 "superblock": true, 00:16:52.047 "num_base_bdevs": 3, 00:16:52.047 "num_base_bdevs_discovered": 2, 00:16:52.047 "num_base_bdevs_operational": 2, 00:16:52.047 "base_bdevs_list": [ 00:16:52.047 { 00:16:52.047 "name": null, 00:16:52.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.047 "is_configured": false, 00:16:52.047 "data_offset": 2048, 00:16:52.047 "data_size": 63488 00:16:52.047 }, 00:16:52.047 { 00:16:52.047 "name": "pt2", 00:16:52.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:52.047 "is_configured": true, 00:16:52.047 "data_offset": 2048, 00:16:52.047 "data_size": 63488 00:16:52.047 }, 00:16:52.047 { 00:16:52.047 "name": "pt3", 00:16:52.047 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:52.047 "is_configured": true, 00:16:52.047 "data_offset": 2048, 00:16:52.047 "data_size": 63488 00:16:52.047 } 00:16:52.047 ] 00:16:52.047 }' 00:16:52.047 09:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.047 09:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.306 09:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:52.306 09:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.306 09:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.306 09:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:52.566 09:53:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.566 09:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:52.566 09:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:52.566 09:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:52.566 09:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.566 09:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.566 [2024-11-27 09:53:53.494121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.566 09:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.566 09:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 8fe160fd-6bf4-441d-a8e6-b25d910091fc '!=' 8fe160fd-6bf4-441d-a8e6-b25d910091fc ']' 00:16:52.566 09:53:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81442 00:16:52.566 09:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81442 ']' 00:16:52.566 09:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81442 00:16:52.566 09:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:52.566 09:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:52.566 09:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81442 00:16:52.566 09:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:52.566 killing process with pid 81442 00:16:52.566 09:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:52.566 09:53:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 81442' 00:16:52.566 09:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81442 00:16:52.566 [2024-11-27 09:53:53.567439] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:52.566 09:53:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81442 00:16:52.566 [2024-11-27 09:53:53.567592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.566 [2024-11-27 09:53:53.567680] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.566 [2024-11-27 09:53:53.567697] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:52.825 [2024-11-27 09:53:53.901991] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:54.205 09:53:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:54.205 00:16:54.205 real 0m8.084s 00:16:54.205 user 0m12.297s 00:16:54.205 sys 0m1.690s 00:16:54.205 ************************************ 00:16:54.205 END TEST raid5f_superblock_test 00:16:54.205 ************************************ 00:16:54.205 09:53:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:54.205 09:53:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.205 09:53:55 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:54.205 09:53:55 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:16:54.205 09:53:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:54.205 09:53:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.205 09:53:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:54.205 ************************************ 00:16:54.205 START TEST 
raid5f_rebuild_test 00:16:54.205 ************************************ 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:54.205 09:53:55 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81886 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81886 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81886 ']' 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:16:54.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.205 09:53:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.205 [2024-11-27 09:53:55.331453] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:16:54.205 [2024-11-27 09:53:55.331713] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:54.205 Zero copy mechanism will not be used. 00:16:54.205 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81886 ] 00:16:54.465 [2024-11-27 09:53:55.514094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.724 [2024-11-27 09:53:55.657401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.983 [2024-11-27 09:53:55.899698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.983 [2024-11-27 09:53:55.899872] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:16:55.243 BaseBdev1_malloc 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.243 [2024-11-27 09:53:56.237395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:55.243 [2024-11-27 09:53:56.237563] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.243 [2024-11-27 09:53:56.237616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:55.243 [2024-11-27 09:53:56.237659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.243 [2024-11-27 09:53:56.240324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.243 [2024-11-27 09:53:56.240425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:55.243 BaseBdev1 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.243 BaseBdev2_malloc 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.243 [2024-11-27 09:53:56.297455] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:55.243 [2024-11-27 09:53:56.297619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.243 [2024-11-27 09:53:56.297675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:55.243 [2024-11-27 09:53:56.297721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.243 [2024-11-27 09:53:56.300379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.243 [2024-11-27 09:53:56.300473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:55.243 BaseBdev2 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.243 BaseBdev3_malloc 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:55.243 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.243 
09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.502 [2024-11-27 09:53:56.375024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:55.502 [2024-11-27 09:53:56.375199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.502 [2024-11-27 09:53:56.375284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:55.502 [2024-11-27 09:53:56.375329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.502 [2024-11-27 09:53:56.377972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.502 [2024-11-27 09:53:56.378084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:55.502 BaseBdev3 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.502 spare_malloc 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.502 spare_delay 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd 
bdev_passthru_create -b spare_delay -p spare 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.502 [2024-11-27 09:53:56.447228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:55.502 [2024-11-27 09:53:56.447383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.502 [2024-11-27 09:53:56.447430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:55.502 [2024-11-27 09:53:56.447473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.502 [2024-11-27 09:53:56.450167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.502 [2024-11-27 09:53:56.450266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:55.502 spare 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.502 [2024-11-27 09:53:56.459316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:55.502 [2024-11-27 09:53:56.461612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:55.502 [2024-11-27 09:53:56.461749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:55.502 [2024-11-27 09:53:56.461888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:55.502 
[2024-11-27 09:53:56.461938] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:55.502 [2024-11-27 09:53:56.462342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:55.502 [2024-11-27 09:53:56.467913] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:55.502 [2024-11-27 09:53:56.467987] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:55.502 [2024-11-27 09:53:56.468351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.502 09:53:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.502 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.502 "name": "raid_bdev1", 00:16:55.502 "uuid": "20c67c62-ce57-4f6b-8ecc-5683a3b836cc", 00:16:55.502 "strip_size_kb": 64, 00:16:55.502 "state": "online", 00:16:55.502 "raid_level": "raid5f", 00:16:55.502 "superblock": false, 00:16:55.502 "num_base_bdevs": 3, 00:16:55.502 "num_base_bdevs_discovered": 3, 00:16:55.502 "num_base_bdevs_operational": 3, 00:16:55.502 "base_bdevs_list": [ 00:16:55.502 { 00:16:55.502 "name": "BaseBdev1", 00:16:55.502 "uuid": "654a28b9-f525-550b-9ae2-0db33b472df5", 00:16:55.502 "is_configured": true, 00:16:55.502 "data_offset": 0, 00:16:55.502 "data_size": 65536 00:16:55.503 }, 00:16:55.503 { 00:16:55.503 "name": "BaseBdev2", 00:16:55.503 "uuid": "8db0bcf8-2d74-57e3-b3f9-8d2b50273273", 00:16:55.503 "is_configured": true, 00:16:55.503 "data_offset": 0, 00:16:55.503 "data_size": 65536 00:16:55.503 }, 00:16:55.503 { 00:16:55.503 "name": "BaseBdev3", 00:16:55.503 "uuid": "f503290a-5419-5c9c-b091-3607dd08e002", 00:16:55.503 "is_configured": true, 00:16:55.503 "data_offset": 0, 00:16:55.503 "data_size": 65536 00:16:55.503 } 00:16:55.503 ] 00:16:55.503 }' 00:16:55.503 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.503 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # 
jq -r '.[].num_blocks' 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.072 [2024-11-27 09:53:56.911419] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:56.072 09:53:56 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:56.072 09:53:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:56.072 [2024-11-27 09:53:57.178780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:56.331 /dev/nbd0 00:16:56.331 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:56.331 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:56.331 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:56.331 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:56.331 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:56.331 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:56.331 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:56.331 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:56.331 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:56.331 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:56.331 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:16:56.331 1+0 records in 00:16:56.331 1+0 records out 00:16:56.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415875 s, 9.8 MB/s 00:16:56.331 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:56.331 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:56.331 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:56.331 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:56.332 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:56.332 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:56.332 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:56.332 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:56.332 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:56.332 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:56.332 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:16:56.591 512+0 records in 00:16:56.591 512+0 records out 00:16:56.591 67108864 bytes (67 MB, 64 MiB) copied, 0.395396 s, 170 MB/s 00:16:56.591 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:56.591 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:56.591 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:56.591 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:56.591 09:53:57 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:56.591 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:56.591 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:56.850 [2024-11-27 09:53:57.868217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.850 [2024-11-27 09:53:57.905928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.850 "name": "raid_bdev1", 00:16:56.850 "uuid": "20c67c62-ce57-4f6b-8ecc-5683a3b836cc", 00:16:56.850 "strip_size_kb": 64, 00:16:56.850 "state": "online", 00:16:56.850 "raid_level": "raid5f", 00:16:56.850 "superblock": false, 00:16:56.850 "num_base_bdevs": 3, 00:16:56.850 "num_base_bdevs_discovered": 2, 00:16:56.850 "num_base_bdevs_operational": 2, 00:16:56.850 "base_bdevs_list": [ 00:16:56.850 { 00:16:56.850 "name": null, 00:16:56.850 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:56.850 "is_configured": false, 00:16:56.850 "data_offset": 0, 00:16:56.850 "data_size": 65536 00:16:56.850 }, 00:16:56.850 { 00:16:56.850 "name": "BaseBdev2", 00:16:56.850 "uuid": "8db0bcf8-2d74-57e3-b3f9-8d2b50273273", 00:16:56.850 "is_configured": true, 00:16:56.850 "data_offset": 0, 00:16:56.850 "data_size": 65536 00:16:56.850 }, 00:16:56.850 { 00:16:56.850 "name": "BaseBdev3", 00:16:56.850 "uuid": "f503290a-5419-5c9c-b091-3607dd08e002", 00:16:56.850 "is_configured": true, 00:16:56.850 "data_offset": 0, 00:16:56.850 "data_size": 65536 00:16:56.850 } 00:16:56.850 ] 00:16:56.850 }' 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.850 09:53:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.419 09:53:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:57.419 09:53:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.419 09:53:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.419 [2024-11-27 09:53:58.333182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:57.419 [2024-11-27 09:53:58.351718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:16:57.419 09:53:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.419 09:53:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:57.419 [2024-11-27 09:53:58.360674] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:58.357 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.357 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.357 
09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.357 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.357 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.357 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.357 09:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.357 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.358 09:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.358 09:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.358 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.358 "name": "raid_bdev1", 00:16:58.358 "uuid": "20c67c62-ce57-4f6b-8ecc-5683a3b836cc", 00:16:58.358 "strip_size_kb": 64, 00:16:58.358 "state": "online", 00:16:58.358 "raid_level": "raid5f", 00:16:58.358 "superblock": false, 00:16:58.358 "num_base_bdevs": 3, 00:16:58.358 "num_base_bdevs_discovered": 3, 00:16:58.358 "num_base_bdevs_operational": 3, 00:16:58.358 "process": { 00:16:58.358 "type": "rebuild", 00:16:58.358 "target": "spare", 00:16:58.358 "progress": { 00:16:58.358 "blocks": 18432, 00:16:58.358 "percent": 14 00:16:58.358 } 00:16:58.358 }, 00:16:58.358 "base_bdevs_list": [ 00:16:58.358 { 00:16:58.358 "name": "spare", 00:16:58.358 "uuid": "3063d682-6433-5cb3-971a-64c098d02b4e", 00:16:58.358 "is_configured": true, 00:16:58.358 "data_offset": 0, 00:16:58.358 "data_size": 65536 00:16:58.358 }, 00:16:58.358 { 00:16:58.358 "name": "BaseBdev2", 00:16:58.358 "uuid": "8db0bcf8-2d74-57e3-b3f9-8d2b50273273", 00:16:58.358 "is_configured": true, 00:16:58.358 "data_offset": 0, 00:16:58.358 "data_size": 65536 00:16:58.358 }, 00:16:58.358 
{ 00:16:58.358 "name": "BaseBdev3", 00:16:58.358 "uuid": "f503290a-5419-5c9c-b091-3607dd08e002", 00:16:58.358 "is_configured": true, 00:16:58.358 "data_offset": 0, 00:16:58.358 "data_size": 65536 00:16:58.358 } 00:16:58.358 ] 00:16:58.358 }' 00:16:58.358 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.358 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.358 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.617 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.617 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:58.617 09:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.617 09:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.617 [2024-11-27 09:53:59.520991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:58.617 [2024-11-27 09:53:59.576946] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:58.617 [2024-11-27 09:53:59.577221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.617 [2024-11-27 09:53:59.577290] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:58.617 [2024-11-27 09:53:59.577304] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:58.617 09:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.617 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:58.617 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:16:58.617 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.617 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.618 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.618 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.618 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.618 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.618 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.618 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.618 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.618 09:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.618 09:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.618 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.618 09:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.618 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.618 "name": "raid_bdev1", 00:16:58.618 "uuid": "20c67c62-ce57-4f6b-8ecc-5683a3b836cc", 00:16:58.618 "strip_size_kb": 64, 00:16:58.618 "state": "online", 00:16:58.618 "raid_level": "raid5f", 00:16:58.618 "superblock": false, 00:16:58.618 "num_base_bdevs": 3, 00:16:58.618 "num_base_bdevs_discovered": 2, 00:16:58.618 "num_base_bdevs_operational": 2, 00:16:58.618 "base_bdevs_list": [ 00:16:58.618 { 00:16:58.618 "name": null, 00:16:58.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.618 
"is_configured": false, 00:16:58.618 "data_offset": 0, 00:16:58.618 "data_size": 65536 00:16:58.618 }, 00:16:58.618 { 00:16:58.618 "name": "BaseBdev2", 00:16:58.618 "uuid": "8db0bcf8-2d74-57e3-b3f9-8d2b50273273", 00:16:58.618 "is_configured": true, 00:16:58.618 "data_offset": 0, 00:16:58.618 "data_size": 65536 00:16:58.618 }, 00:16:58.618 { 00:16:58.618 "name": "BaseBdev3", 00:16:58.618 "uuid": "f503290a-5419-5c9c-b091-3607dd08e002", 00:16:58.618 "is_configured": true, 00:16:58.618 "data_offset": 0, 00:16:58.618 "data_size": 65536 00:16:58.618 } 00:16:58.618 ] 00:16:58.618 }' 00:16:58.618 09:53:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.618 09:53:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.188 09:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:59.188 09:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.188 09:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:59.188 09:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:59.188 09:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.188 09:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.188 09:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.188 09:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.188 09:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.188 09:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.188 09:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.188 "name": 
"raid_bdev1", 00:16:59.188 "uuid": "20c67c62-ce57-4f6b-8ecc-5683a3b836cc", 00:16:59.188 "strip_size_kb": 64, 00:16:59.188 "state": "online", 00:16:59.188 "raid_level": "raid5f", 00:16:59.188 "superblock": false, 00:16:59.188 "num_base_bdevs": 3, 00:16:59.188 "num_base_bdevs_discovered": 2, 00:16:59.188 "num_base_bdevs_operational": 2, 00:16:59.188 "base_bdevs_list": [ 00:16:59.188 { 00:16:59.188 "name": null, 00:16:59.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.188 "is_configured": false, 00:16:59.188 "data_offset": 0, 00:16:59.188 "data_size": 65536 00:16:59.188 }, 00:16:59.188 { 00:16:59.188 "name": "BaseBdev2", 00:16:59.188 "uuid": "8db0bcf8-2d74-57e3-b3f9-8d2b50273273", 00:16:59.188 "is_configured": true, 00:16:59.188 "data_offset": 0, 00:16:59.188 "data_size": 65536 00:16:59.188 }, 00:16:59.188 { 00:16:59.188 "name": "BaseBdev3", 00:16:59.188 "uuid": "f503290a-5419-5c9c-b091-3607dd08e002", 00:16:59.188 "is_configured": true, 00:16:59.188 "data_offset": 0, 00:16:59.188 "data_size": 65536 00:16:59.188 } 00:16:59.188 ] 00:16:59.188 }' 00:16:59.188 09:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.188 09:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:59.188 09:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.188 09:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:59.188 09:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:59.188 09:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.188 09:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.188 [2024-11-27 09:54:00.197188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:59.188 [2024-11-27 
09:54:00.215405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:59.188 09:54:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.188 09:54:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:59.188 [2024-11-27 09:54:00.223967] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:00.124 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.124 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.124 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.124 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.124 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.124 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.124 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.124 09:54:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.124 09:54:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.124 09:54:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.389 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.389 "name": "raid_bdev1", 00:17:00.389 "uuid": "20c67c62-ce57-4f6b-8ecc-5683a3b836cc", 00:17:00.389 "strip_size_kb": 64, 00:17:00.389 "state": "online", 00:17:00.389 "raid_level": "raid5f", 00:17:00.389 "superblock": false, 00:17:00.389 "num_base_bdevs": 3, 00:17:00.389 "num_base_bdevs_discovered": 3, 00:17:00.389 "num_base_bdevs_operational": 3, 
00:17:00.389 "process": { 00:17:00.389 "type": "rebuild", 00:17:00.389 "target": "spare", 00:17:00.389 "progress": { 00:17:00.389 "blocks": 18432, 00:17:00.389 "percent": 14 00:17:00.389 } 00:17:00.389 }, 00:17:00.389 "base_bdevs_list": [ 00:17:00.389 { 00:17:00.389 "name": "spare", 00:17:00.389 "uuid": "3063d682-6433-5cb3-971a-64c098d02b4e", 00:17:00.389 "is_configured": true, 00:17:00.389 "data_offset": 0, 00:17:00.389 "data_size": 65536 00:17:00.389 }, 00:17:00.389 { 00:17:00.389 "name": "BaseBdev2", 00:17:00.389 "uuid": "8db0bcf8-2d74-57e3-b3f9-8d2b50273273", 00:17:00.389 "is_configured": true, 00:17:00.389 "data_offset": 0, 00:17:00.389 "data_size": 65536 00:17:00.389 }, 00:17:00.389 { 00:17:00.389 "name": "BaseBdev3", 00:17:00.389 "uuid": "f503290a-5419-5c9c-b091-3607dd08e002", 00:17:00.389 "is_configured": true, 00:17:00.389 "data_offset": 0, 00:17:00.389 "data_size": 65536 00:17:00.389 } 00:17:00.389 ] 00:17:00.389 }' 00:17:00.389 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.389 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.389 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.389 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:00.389 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:00.389 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:00.389 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:00.389 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=556 00:17:00.390 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:00.390 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.390 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.390 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.390 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.390 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.390 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.390 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.390 09:54:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.390 09:54:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.390 09:54:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.390 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.390 "name": "raid_bdev1", 00:17:00.390 "uuid": "20c67c62-ce57-4f6b-8ecc-5683a3b836cc", 00:17:00.390 "strip_size_kb": 64, 00:17:00.390 "state": "online", 00:17:00.390 "raid_level": "raid5f", 00:17:00.390 "superblock": false, 00:17:00.390 "num_base_bdevs": 3, 00:17:00.390 "num_base_bdevs_discovered": 3, 00:17:00.390 "num_base_bdevs_operational": 3, 00:17:00.390 "process": { 00:17:00.390 "type": "rebuild", 00:17:00.390 "target": "spare", 00:17:00.390 "progress": { 00:17:00.390 "blocks": 22528, 00:17:00.390 "percent": 17 00:17:00.390 } 00:17:00.390 }, 00:17:00.390 "base_bdevs_list": [ 00:17:00.390 { 00:17:00.390 "name": "spare", 00:17:00.390 "uuid": "3063d682-6433-5cb3-971a-64c098d02b4e", 00:17:00.390 "is_configured": true, 00:17:00.390 "data_offset": 0, 00:17:00.390 "data_size": 65536 00:17:00.390 }, 00:17:00.390 { 00:17:00.390 "name": "BaseBdev2", 
00:17:00.390 "uuid": "8db0bcf8-2d74-57e3-b3f9-8d2b50273273", 00:17:00.390 "is_configured": true, 00:17:00.390 "data_offset": 0, 00:17:00.390 "data_size": 65536 00:17:00.390 }, 00:17:00.390 { 00:17:00.390 "name": "BaseBdev3", 00:17:00.390 "uuid": "f503290a-5419-5c9c-b091-3607dd08e002", 00:17:00.390 "is_configured": true, 00:17:00.390 "data_offset": 0, 00:17:00.390 "data_size": 65536 00:17:00.390 } 00:17:00.390 ] 00:17:00.390 }' 00:17:00.390 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.390 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.390 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.390 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:00.390 09:54:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:01.770 09:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:01.770 09:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.770 09:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.770 09:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.770 09:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.770 09:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.770 09:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.770 09:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.770 09:54:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.770 
09:54:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.770 09:54:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.770 09:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.770 "name": "raid_bdev1", 00:17:01.770 "uuid": "20c67c62-ce57-4f6b-8ecc-5683a3b836cc", 00:17:01.770 "strip_size_kb": 64, 00:17:01.770 "state": "online", 00:17:01.770 "raid_level": "raid5f", 00:17:01.770 "superblock": false, 00:17:01.770 "num_base_bdevs": 3, 00:17:01.770 "num_base_bdevs_discovered": 3, 00:17:01.770 "num_base_bdevs_operational": 3, 00:17:01.770 "process": { 00:17:01.770 "type": "rebuild", 00:17:01.770 "target": "spare", 00:17:01.770 "progress": { 00:17:01.770 "blocks": 45056, 00:17:01.770 "percent": 34 00:17:01.770 } 00:17:01.770 }, 00:17:01.770 "base_bdevs_list": [ 00:17:01.770 { 00:17:01.770 "name": "spare", 00:17:01.770 "uuid": "3063d682-6433-5cb3-971a-64c098d02b4e", 00:17:01.770 "is_configured": true, 00:17:01.770 "data_offset": 0, 00:17:01.770 "data_size": 65536 00:17:01.770 }, 00:17:01.770 { 00:17:01.770 "name": "BaseBdev2", 00:17:01.770 "uuid": "8db0bcf8-2d74-57e3-b3f9-8d2b50273273", 00:17:01.770 "is_configured": true, 00:17:01.770 "data_offset": 0, 00:17:01.770 "data_size": 65536 00:17:01.770 }, 00:17:01.770 { 00:17:01.770 "name": "BaseBdev3", 00:17:01.770 "uuid": "f503290a-5419-5c9c-b091-3607dd08e002", 00:17:01.770 "is_configured": true, 00:17:01.770 "data_offset": 0, 00:17:01.770 "data_size": 65536 00:17:01.770 } 00:17:01.770 ] 00:17:01.770 }' 00:17:01.770 09:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.770 09:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.770 09:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.770 09:54:02 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.770 09:54:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:02.708 09:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:02.708 09:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.708 09:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.708 09:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.708 09:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:02.708 09:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.708 09:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.708 09:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.708 09:54:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.708 09:54:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.708 09:54:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.708 09:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.708 "name": "raid_bdev1", 00:17:02.708 "uuid": "20c67c62-ce57-4f6b-8ecc-5683a3b836cc", 00:17:02.708 "strip_size_kb": 64, 00:17:02.708 "state": "online", 00:17:02.708 "raid_level": "raid5f", 00:17:02.708 "superblock": false, 00:17:02.708 "num_base_bdevs": 3, 00:17:02.708 "num_base_bdevs_discovered": 3, 00:17:02.708 "num_base_bdevs_operational": 3, 00:17:02.708 "process": { 00:17:02.708 "type": "rebuild", 00:17:02.708 "target": "spare", 00:17:02.708 "progress": { 00:17:02.708 "blocks": 67584, 00:17:02.708 "percent": 51 00:17:02.708 } 
00:17:02.708 }, 00:17:02.708 "base_bdevs_list": [ 00:17:02.708 { 00:17:02.708 "name": "spare", 00:17:02.708 "uuid": "3063d682-6433-5cb3-971a-64c098d02b4e", 00:17:02.708 "is_configured": true, 00:17:02.708 "data_offset": 0, 00:17:02.708 "data_size": 65536 00:17:02.708 }, 00:17:02.708 { 00:17:02.708 "name": "BaseBdev2", 00:17:02.708 "uuid": "8db0bcf8-2d74-57e3-b3f9-8d2b50273273", 00:17:02.708 "is_configured": true, 00:17:02.708 "data_offset": 0, 00:17:02.708 "data_size": 65536 00:17:02.708 }, 00:17:02.708 { 00:17:02.708 "name": "BaseBdev3", 00:17:02.708 "uuid": "f503290a-5419-5c9c-b091-3607dd08e002", 00:17:02.708 "is_configured": true, 00:17:02.708 "data_offset": 0, 00:17:02.708 "data_size": 65536 00:17:02.708 } 00:17:02.708 ] 00:17:02.708 }' 00:17:02.708 09:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.708 09:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:02.708 09:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.708 09:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.708 09:54:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:04.090 09:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:04.090 09:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.090 09:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.090 09:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.090 09:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.090 09:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.090 09:54:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.090 09:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.090 09:54:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.090 09:54:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.090 09:54:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.091 09:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.091 "name": "raid_bdev1", 00:17:04.091 "uuid": "20c67c62-ce57-4f6b-8ecc-5683a3b836cc", 00:17:04.091 "strip_size_kb": 64, 00:17:04.091 "state": "online", 00:17:04.091 "raid_level": "raid5f", 00:17:04.091 "superblock": false, 00:17:04.091 "num_base_bdevs": 3, 00:17:04.091 "num_base_bdevs_discovered": 3, 00:17:04.091 "num_base_bdevs_operational": 3, 00:17:04.091 "process": { 00:17:04.091 "type": "rebuild", 00:17:04.091 "target": "spare", 00:17:04.091 "progress": { 00:17:04.091 "blocks": 92160, 00:17:04.091 "percent": 70 00:17:04.091 } 00:17:04.091 }, 00:17:04.091 "base_bdevs_list": [ 00:17:04.091 { 00:17:04.091 "name": "spare", 00:17:04.091 "uuid": "3063d682-6433-5cb3-971a-64c098d02b4e", 00:17:04.091 "is_configured": true, 00:17:04.091 "data_offset": 0, 00:17:04.091 "data_size": 65536 00:17:04.091 }, 00:17:04.091 { 00:17:04.091 "name": "BaseBdev2", 00:17:04.091 "uuid": "8db0bcf8-2d74-57e3-b3f9-8d2b50273273", 00:17:04.091 "is_configured": true, 00:17:04.091 "data_offset": 0, 00:17:04.091 "data_size": 65536 00:17:04.091 }, 00:17:04.091 { 00:17:04.091 "name": "BaseBdev3", 00:17:04.091 "uuid": "f503290a-5419-5c9c-b091-3607dd08e002", 00:17:04.091 "is_configured": true, 00:17:04.091 "data_offset": 0, 00:17:04.091 "data_size": 65536 00:17:04.091 } 00:17:04.091 ] 00:17:04.091 }' 00:17:04.091 09:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:17:04.091 09:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.091 09:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.091 09:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.091 09:54:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:05.027 09:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:05.027 09:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.027 09:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.027 09:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.027 09:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.027 09:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.027 09:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.027 09:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.027 09:54:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.027 09:54:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.027 09:54:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.027 09:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.027 "name": "raid_bdev1", 00:17:05.027 "uuid": "20c67c62-ce57-4f6b-8ecc-5683a3b836cc", 00:17:05.027 "strip_size_kb": 64, 00:17:05.027 "state": "online", 00:17:05.027 "raid_level": "raid5f", 00:17:05.027 "superblock": 
false, 00:17:05.027 "num_base_bdevs": 3, 00:17:05.027 "num_base_bdevs_discovered": 3, 00:17:05.027 "num_base_bdevs_operational": 3, 00:17:05.027 "process": { 00:17:05.027 "type": "rebuild", 00:17:05.027 "target": "spare", 00:17:05.027 "progress": { 00:17:05.027 "blocks": 114688, 00:17:05.027 "percent": 87 00:17:05.027 } 00:17:05.027 }, 00:17:05.027 "base_bdevs_list": [ 00:17:05.027 { 00:17:05.027 "name": "spare", 00:17:05.027 "uuid": "3063d682-6433-5cb3-971a-64c098d02b4e", 00:17:05.027 "is_configured": true, 00:17:05.027 "data_offset": 0, 00:17:05.027 "data_size": 65536 00:17:05.027 }, 00:17:05.027 { 00:17:05.027 "name": "BaseBdev2", 00:17:05.027 "uuid": "8db0bcf8-2d74-57e3-b3f9-8d2b50273273", 00:17:05.027 "is_configured": true, 00:17:05.027 "data_offset": 0, 00:17:05.027 "data_size": 65536 00:17:05.027 }, 00:17:05.027 { 00:17:05.027 "name": "BaseBdev3", 00:17:05.027 "uuid": "f503290a-5419-5c9c-b091-3607dd08e002", 00:17:05.027 "is_configured": true, 00:17:05.027 "data_offset": 0, 00:17:05.027 "data_size": 65536 00:17:05.027 } 00:17:05.027 ] 00:17:05.027 }' 00:17:05.027 09:54:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.027 09:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.027 09:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.027 09:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.027 09:54:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:05.595 [2024-11-27 09:54:06.709143] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:05.595 [2024-11-27 09:54:06.709440] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:05.595 [2024-11-27 09:54:06.709536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.164 "name": "raid_bdev1", 00:17:06.164 "uuid": "20c67c62-ce57-4f6b-8ecc-5683a3b836cc", 00:17:06.164 "strip_size_kb": 64, 00:17:06.164 "state": "online", 00:17:06.164 "raid_level": "raid5f", 00:17:06.164 "superblock": false, 00:17:06.164 "num_base_bdevs": 3, 00:17:06.164 "num_base_bdevs_discovered": 3, 00:17:06.164 "num_base_bdevs_operational": 3, 00:17:06.164 "base_bdevs_list": [ 00:17:06.164 { 00:17:06.164 "name": "spare", 00:17:06.164 "uuid": "3063d682-6433-5cb3-971a-64c098d02b4e", 00:17:06.164 "is_configured": true, 00:17:06.164 "data_offset": 0, 00:17:06.164 "data_size": 65536 00:17:06.164 }, 00:17:06.164 { 00:17:06.164 "name": "BaseBdev2", 00:17:06.164 "uuid": 
"8db0bcf8-2d74-57e3-b3f9-8d2b50273273", 00:17:06.164 "is_configured": true, 00:17:06.164 "data_offset": 0, 00:17:06.164 "data_size": 65536 00:17:06.164 }, 00:17:06.164 { 00:17:06.164 "name": "BaseBdev3", 00:17:06.164 "uuid": "f503290a-5419-5c9c-b091-3607dd08e002", 00:17:06.164 "is_configured": true, 00:17:06.164 "data_offset": 0, 00:17:06.164 "data_size": 65536 00:17:06.164 } 00:17:06.164 ] 00:17:06.164 }' 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.164 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.164 "name": "raid_bdev1", 00:17:06.164 "uuid": "20c67c62-ce57-4f6b-8ecc-5683a3b836cc", 00:17:06.164 "strip_size_kb": 64, 00:17:06.164 "state": "online", 00:17:06.164 "raid_level": "raid5f", 00:17:06.164 "superblock": false, 00:17:06.164 "num_base_bdevs": 3, 00:17:06.164 "num_base_bdevs_discovered": 3, 00:17:06.164 "num_base_bdevs_operational": 3, 00:17:06.164 "base_bdevs_list": [ 00:17:06.164 { 00:17:06.164 "name": "spare", 00:17:06.165 "uuid": "3063d682-6433-5cb3-971a-64c098d02b4e", 00:17:06.165 "is_configured": true, 00:17:06.165 "data_offset": 0, 00:17:06.165 "data_size": 65536 00:17:06.165 }, 00:17:06.165 { 00:17:06.165 "name": "BaseBdev2", 00:17:06.165 "uuid": "8db0bcf8-2d74-57e3-b3f9-8d2b50273273", 00:17:06.165 "is_configured": true, 00:17:06.165 "data_offset": 0, 00:17:06.165 "data_size": 65536 00:17:06.165 }, 00:17:06.165 { 00:17:06.165 "name": "BaseBdev3", 00:17:06.165 "uuid": "f503290a-5419-5c9c-b091-3607dd08e002", 00:17:06.165 "is_configured": true, 00:17:06.165 "data_offset": 0, 00:17:06.165 "data_size": 65536 00:17:06.165 } 00:17:06.165 ] 00:17:06.165 }' 00:17:06.165 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.436 "name": "raid_bdev1", 00:17:06.436 "uuid": "20c67c62-ce57-4f6b-8ecc-5683a3b836cc", 00:17:06.436 "strip_size_kb": 64, 00:17:06.436 "state": "online", 00:17:06.436 "raid_level": "raid5f", 00:17:06.436 "superblock": false, 00:17:06.436 "num_base_bdevs": 3, 00:17:06.436 "num_base_bdevs_discovered": 3, 00:17:06.436 "num_base_bdevs_operational": 3, 00:17:06.436 "base_bdevs_list": [ 00:17:06.436 { 00:17:06.436 "name": "spare", 00:17:06.436 "uuid": "3063d682-6433-5cb3-971a-64c098d02b4e", 00:17:06.436 "is_configured": true, 00:17:06.436 "data_offset": 
0, 00:17:06.436 "data_size": 65536 00:17:06.436 }, 00:17:06.436 { 00:17:06.436 "name": "BaseBdev2", 00:17:06.436 "uuid": "8db0bcf8-2d74-57e3-b3f9-8d2b50273273", 00:17:06.436 "is_configured": true, 00:17:06.436 "data_offset": 0, 00:17:06.436 "data_size": 65536 00:17:06.436 }, 00:17:06.436 { 00:17:06.436 "name": "BaseBdev3", 00:17:06.436 "uuid": "f503290a-5419-5c9c-b091-3607dd08e002", 00:17:06.436 "is_configured": true, 00:17:06.436 "data_offset": 0, 00:17:06.436 "data_size": 65536 00:17:06.436 } 00:17:06.436 ] 00:17:06.436 }' 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.436 09:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.696 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:06.696 09:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.696 09:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.696 [2024-11-27 09:54:07.788515] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.696 [2024-11-27 09:54:07.788638] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:06.696 [2024-11-27 09:54:07.788820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.696 [2024-11-27 09:54:07.788984] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:06.696 [2024-11-27 09:54:07.789076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:06.696 09:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.696 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:06.696 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:06.696 09:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.696 09:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.696 09:54:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.955 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:06.955 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:06.955 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:06.955 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:06.955 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:06.955 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:06.955 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:06.955 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:06.955 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:06.955 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:06.955 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:06.955 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:06.955 09:54:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:06.955 /dev/nbd0 00:17:06.955 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:06.955 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 
-- # waitfornbd nbd0 00:17:06.955 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:06.955 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:06.955 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:06.955 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:06.955 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:07.215 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:07.215 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:07.215 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:07.215 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.215 1+0 records in 00:17:07.215 1+0 records out 00:17:07.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457592 s, 9.0 MB/s 00:17:07.215 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.215 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:07.215 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.215 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:07.215 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:07.215 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.215 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:07.215 09:54:08 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:07.215 /dev/nbd1 00:17:07.474 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:07.474 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:07.474 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:07.474 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:07.474 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.475 1+0 records in 00:17:07.475 1+0 records out 00:17:07.475 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365623 s, 11.2 MB/s 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.475 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:07.734 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:07.734 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:07.734 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:07.734 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:07.734 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:07.734 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:07.734 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:17:07.734 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:07.734 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.734 09:54:08 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:07.993 09:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:07.993 09:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:07.993 09:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:07.993 09:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:07.993 09:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:07.993 09:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:07.993 09:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:07.993 09:54:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:07.993 09:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:07.993 09:54:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81886 00:17:07.993 09:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81886 ']' 00:17:07.993 09:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81886 00:17:07.993 09:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:07.993 09:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:07.993 09:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81886 00:17:07.993 09:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:17:07.993 09:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:07.993 09:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81886' 00:17:07.993 killing process with pid 81886 00:17:07.993 Received shutdown signal, test time was about 60.000000 seconds 00:17:07.993 00:17:07.993 Latency(us) 00:17:07.993 [2024-11-27T09:54:09.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.994 [2024-11-27T09:54:09.127Z] =================================================================================================================== 00:17:07.994 [2024-11-27T09:54:09.127Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:07.994 09:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81886 00:17:07.994 [2024-11-27 09:54:09.084796] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:07.994 09:54:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81886 00:17:08.565 [2024-11-27 09:54:09.521267] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:09.944 09:54:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:09.944 00:17:09.944 real 0m15.527s 00:17:09.944 user 0m18.813s 00:17:09.944 sys 0m2.216s 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.945 ************************************ 00:17:09.945 END TEST raid5f_rebuild_test 00:17:09.945 ************************************ 00:17:09.945 09:54:10 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:17:09.945 09:54:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:09.945 09:54:10 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:17:09.945 09:54:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:09.945 ************************************ 00:17:09.945 START TEST raid5f_rebuild_test_sb 00:17:09.945 ************************************ 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:09.945 09:54:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82331 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82331 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82331 
']' 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.945 09:54:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.945 [2024-11-27 09:54:10.932395] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:17:09.945 [2024-11-27 09:54:10.932679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:09.945 Zero copy mechanism will not be used. 
00:17:09.945 -allocations --file-prefix=spdk_pid82331 ] 00:17:10.204 [2024-11-27 09:54:11.115181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.204 [2024-11-27 09:54:11.254996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.463 [2024-11-27 09:54:11.493501] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:10.463 [2024-11-27 09:54:11.493610] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:10.722 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:10.722 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:10.722 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:10.722 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:10.722 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.722 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.722 BaseBdev1_malloc 00:17:10.722 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.722 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:10.722 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.722 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.722 [2024-11-27 09:54:11.841045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:10.722 [2024-11-27 09:54:11.841135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.722 [2024-11-27 09:54:11.841170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000007280 00:17:10.722 [2024-11-27 09:54:11.841186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.722 [2024-11-27 09:54:11.843856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.722 [2024-11-27 09:54:11.843913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:10.722 BaseBdev1 00:17:10.722 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.722 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:10.722 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:10.722 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.722 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.982 BaseBdev2_malloc 00:17:10.982 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.982 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:10.982 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.982 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.982 [2024-11-27 09:54:11.904227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:10.982 [2024-11-27 09:54:11.904323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.982 [2024-11-27 09:54:11.904357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:10.982 [2024-11-27 09:54:11.904373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.982 [2024-11-27 
09:54:11.907094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.982 [2024-11-27 09:54:11.907142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:10.982 BaseBdev2 00:17:10.982 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.982 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:10.982 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:10.982 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.982 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.982 BaseBdev3_malloc 00:17:10.982 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.982 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:10.982 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.982 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.982 [2024-11-27 09:54:11.979269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:10.982 [2024-11-27 09:54:11.979423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.982 [2024-11-27 09:54:11.979461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:10.982 [2024-11-27 09:54:11.979475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.982 [2024-11-27 09:54:11.982168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.982 [2024-11-27 09:54:11.982216] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:10.982 BaseBdev3 00:17:10.982 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.982 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:10.982 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.982 09:54:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.982 spare_malloc 00:17:10.982 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.982 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:10.982 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.982 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.982 spare_delay 00:17:10.982 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.982 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:10.982 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.982 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.982 [2024-11-27 09:54:12.051920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:10.982 [2024-11-27 09:54:12.052032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.982 [2024-11-27 09:54:12.052064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:10.982 [2024-11-27 09:54:12.052079] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:17:10.982 [2024-11-27 09:54:12.054787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.982 [2024-11-27 09:54:12.054848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:10.982 spare 00:17:10.983 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.983 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:10.983 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.983 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.983 [2024-11-27 09:54:12.064011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:10.983 [2024-11-27 09:54:12.066293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:10.983 [2024-11-27 09:54:12.066380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:10.983 [2024-11-27 09:54:12.066619] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:10.983 [2024-11-27 09:54:12.066634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:10.983 [2024-11-27 09:54:12.066976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:10.983 [2024-11-27 09:54:12.073160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:10.983 [2024-11-27 09:54:12.073242] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:10.983 [2024-11-27 09:54:12.073601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.983 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.983 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:10.983 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.983 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.983 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.983 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.983 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.983 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.983 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.983 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.983 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.983 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.983 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.983 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.983 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.983 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.242 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.242 "name": "raid_bdev1", 00:17:11.242 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:11.242 "strip_size_kb": 64, 00:17:11.242 "state": "online", 
00:17:11.242 "raid_level": "raid5f", 00:17:11.242 "superblock": true, 00:17:11.242 "num_base_bdevs": 3, 00:17:11.242 "num_base_bdevs_discovered": 3, 00:17:11.242 "num_base_bdevs_operational": 3, 00:17:11.242 "base_bdevs_list": [ 00:17:11.242 { 00:17:11.242 "name": "BaseBdev1", 00:17:11.242 "uuid": "73960698-1eba-5dc2-8a82-b6efbf990ebc", 00:17:11.242 "is_configured": true, 00:17:11.242 "data_offset": 2048, 00:17:11.242 "data_size": 63488 00:17:11.242 }, 00:17:11.242 { 00:17:11.242 "name": "BaseBdev2", 00:17:11.242 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:11.242 "is_configured": true, 00:17:11.242 "data_offset": 2048, 00:17:11.242 "data_size": 63488 00:17:11.242 }, 00:17:11.242 { 00:17:11.242 "name": "BaseBdev3", 00:17:11.242 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:11.242 "is_configured": true, 00:17:11.242 "data_offset": 2048, 00:17:11.242 "data_size": 63488 00:17:11.242 } 00:17:11.242 ] 00:17:11.242 }' 00:17:11.242 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.242 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.502 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:11.502 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:11.502 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.502 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.502 [2024-11-27 09:54:12.556755] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:11.502 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.502 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:17:11.502 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # 
jq -r '.[].base_bdevs_list[0].data_offset' 00:17:11.502 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.502 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.502 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.502 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.762 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:11.762 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:11.762 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:11.762 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:11.762 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:11.762 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:11.762 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:11.762 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:11.762 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:11.762 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:11.762 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:11.762 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:11.762 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:11.762 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:11.762 [2024-11-27 09:54:12.844145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:11.762 /dev/nbd0 00:17:11.762 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:12.022 1+0 records in 00:17:12.022 1+0 records out 00:17:12.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000700966 s, 5.8 MB/s 00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:12.022 09:54:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:17:12.281 496+0 records in 00:17:12.281 496+0 records out 00:17:12.281 65011712 bytes (65 MB, 62 MiB) copied, 0.369107 s, 176 MB/s 00:17:12.281 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:12.281 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.281 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:12.281 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:12.281 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:12.281 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.281 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:12.540 [2024-11-27 09:54:13.504099] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.540 09:54:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:12.540 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:12.540 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:12.540 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.541 [2024-11-27 09:54:13.541162] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.541 "name": "raid_bdev1", 00:17:12.541 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:12.541 "strip_size_kb": 64, 00:17:12.541 "state": "online", 00:17:12.541 "raid_level": "raid5f", 00:17:12.541 "superblock": true, 00:17:12.541 "num_base_bdevs": 3, 00:17:12.541 "num_base_bdevs_discovered": 2, 00:17:12.541 "num_base_bdevs_operational": 2, 00:17:12.541 "base_bdevs_list": [ 00:17:12.541 { 00:17:12.541 "name": null, 00:17:12.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.541 "is_configured": false, 00:17:12.541 "data_offset": 0, 00:17:12.541 "data_size": 63488 00:17:12.541 }, 00:17:12.541 { 00:17:12.541 "name": "BaseBdev2", 00:17:12.541 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:12.541 "is_configured": true, 00:17:12.541 "data_offset": 2048, 00:17:12.541 "data_size": 63488 00:17:12.541 }, 00:17:12.541 
{ 00:17:12.541 "name": "BaseBdev3", 00:17:12.541 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:12.541 "is_configured": true, 00:17:12.541 "data_offset": 2048, 00:17:12.541 "data_size": 63488 00:17:12.541 } 00:17:12.541 ] 00:17:12.541 }' 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.541 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.108 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:13.108 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.108 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.108 [2024-11-27 09:54:13.980405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:13.108 [2024-11-27 09:54:13.999867] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:17:13.108 09:54:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.108 09:54:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:13.108 [2024-11-27 09:54:14.009075] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:14.045 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.045 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.045 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.045 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.045 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.045 09:54:15 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.046 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.046 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.046 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.046 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.046 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.046 "name": "raid_bdev1", 00:17:14.046 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:14.046 "strip_size_kb": 64, 00:17:14.046 "state": "online", 00:17:14.046 "raid_level": "raid5f", 00:17:14.046 "superblock": true, 00:17:14.046 "num_base_bdevs": 3, 00:17:14.046 "num_base_bdevs_discovered": 3, 00:17:14.046 "num_base_bdevs_operational": 3, 00:17:14.046 "process": { 00:17:14.046 "type": "rebuild", 00:17:14.046 "target": "spare", 00:17:14.046 "progress": { 00:17:14.046 "blocks": 20480, 00:17:14.046 "percent": 16 00:17:14.046 } 00:17:14.046 }, 00:17:14.046 "base_bdevs_list": [ 00:17:14.046 { 00:17:14.046 "name": "spare", 00:17:14.046 "uuid": "2ff1c499-c783-5c4d-b243-1604ec407d5d", 00:17:14.046 "is_configured": true, 00:17:14.046 "data_offset": 2048, 00:17:14.046 "data_size": 63488 00:17:14.046 }, 00:17:14.046 { 00:17:14.046 "name": "BaseBdev2", 00:17:14.046 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:14.046 "is_configured": true, 00:17:14.046 "data_offset": 2048, 00:17:14.046 "data_size": 63488 00:17:14.046 }, 00:17:14.046 { 00:17:14.046 "name": "BaseBdev3", 00:17:14.046 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:14.046 "is_configured": true, 00:17:14.046 "data_offset": 2048, 00:17:14.046 "data_size": 63488 00:17:14.046 } 00:17:14.046 ] 00:17:14.046 }' 00:17:14.046 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:17:14.046 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.046 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.046 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.046 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:14.046 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.046 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.046 [2024-11-27 09:54:15.141406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:14.306 [2024-11-27 09:54:15.225631] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:14.306 [2024-11-27 09:54:15.225929] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.306 [2024-11-27 09:54:15.225964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:14.306 [2024-11-27 09:54:15.225977] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:14.306 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.306 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:14.306 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.306 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.306 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.306 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.306 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:14.306 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.306 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.306 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.306 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.306 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.306 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.306 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.306 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.306 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.306 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.306 "name": "raid_bdev1", 00:17:14.306 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:14.306 "strip_size_kb": 64, 00:17:14.306 "state": "online", 00:17:14.306 "raid_level": "raid5f", 00:17:14.306 "superblock": true, 00:17:14.306 "num_base_bdevs": 3, 00:17:14.306 "num_base_bdevs_discovered": 2, 00:17:14.306 "num_base_bdevs_operational": 2, 00:17:14.306 "base_bdevs_list": [ 00:17:14.306 { 00:17:14.306 "name": null, 00:17:14.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.306 "is_configured": false, 00:17:14.306 "data_offset": 0, 00:17:14.306 "data_size": 63488 00:17:14.306 }, 00:17:14.306 { 00:17:14.306 "name": "BaseBdev2", 00:17:14.306 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:14.306 
"is_configured": true, 00:17:14.306 "data_offset": 2048, 00:17:14.306 "data_size": 63488 00:17:14.306 }, 00:17:14.306 { 00:17:14.306 "name": "BaseBdev3", 00:17:14.306 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:14.306 "is_configured": true, 00:17:14.306 "data_offset": 2048, 00:17:14.306 "data_size": 63488 00:17:14.306 } 00:17:14.306 ] 00:17:14.306 }' 00:17:14.306 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.306 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.565 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:14.565 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.565 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:14.565 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:14.565 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.565 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.565 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.565 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.565 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.824 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.824 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.824 "name": "raid_bdev1", 00:17:14.824 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:14.824 "strip_size_kb": 64, 00:17:14.824 "state": "online", 00:17:14.824 "raid_level": "raid5f", 
00:17:14.824 "superblock": true, 00:17:14.824 "num_base_bdevs": 3, 00:17:14.824 "num_base_bdevs_discovered": 2, 00:17:14.824 "num_base_bdevs_operational": 2, 00:17:14.824 "base_bdevs_list": [ 00:17:14.824 { 00:17:14.824 "name": null, 00:17:14.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.824 "is_configured": false, 00:17:14.824 "data_offset": 0, 00:17:14.824 "data_size": 63488 00:17:14.824 }, 00:17:14.824 { 00:17:14.824 "name": "BaseBdev2", 00:17:14.824 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:14.824 "is_configured": true, 00:17:14.824 "data_offset": 2048, 00:17:14.824 "data_size": 63488 00:17:14.824 }, 00:17:14.824 { 00:17:14.824 "name": "BaseBdev3", 00:17:14.824 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:14.824 "is_configured": true, 00:17:14.824 "data_offset": 2048, 00:17:14.824 "data_size": 63488 00:17:14.824 } 00:17:14.824 ] 00:17:14.824 }' 00:17:14.824 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.824 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:14.824 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.824 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:14.824 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:14.824 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.824 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.824 [2024-11-27 09:54:15.825838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:14.824 [2024-11-27 09:54:15.843635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:17:14.824 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.824 09:54:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:14.824 [2024-11-27 09:54:15.852307] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:15.760 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:15.760 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.760 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:15.760 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:15.760 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.760 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.760 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.760 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.760 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.760 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.019 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.019 "name": "raid_bdev1", 00:17:16.019 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:16.019 "strip_size_kb": 64, 00:17:16.019 "state": "online", 00:17:16.019 "raid_level": "raid5f", 00:17:16.019 "superblock": true, 00:17:16.019 "num_base_bdevs": 3, 00:17:16.019 "num_base_bdevs_discovered": 3, 00:17:16.019 "num_base_bdevs_operational": 3, 00:17:16.019 "process": { 00:17:16.019 "type": "rebuild", 00:17:16.019 "target": "spare", 00:17:16.019 "progress": { 
00:17:16.019 "blocks": 18432, 00:17:16.019 "percent": 14 00:17:16.019 } 00:17:16.019 }, 00:17:16.019 "base_bdevs_list": [ 00:17:16.019 { 00:17:16.019 "name": "spare", 00:17:16.019 "uuid": "2ff1c499-c783-5c4d-b243-1604ec407d5d", 00:17:16.019 "is_configured": true, 00:17:16.019 "data_offset": 2048, 00:17:16.019 "data_size": 63488 00:17:16.019 }, 00:17:16.019 { 00:17:16.019 "name": "BaseBdev2", 00:17:16.019 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:16.019 "is_configured": true, 00:17:16.019 "data_offset": 2048, 00:17:16.019 "data_size": 63488 00:17:16.019 }, 00:17:16.019 { 00:17:16.019 "name": "BaseBdev3", 00:17:16.019 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:16.019 "is_configured": true, 00:17:16.019 "data_offset": 2048, 00:17:16.019 "data_size": 63488 00:17:16.019 } 00:17:16.019 ] 00:17:16.019 }' 00:17:16.019 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.019 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.019 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.019 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.019 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:16.019 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:16.019 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:16.019 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:16.019 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:16.019 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=571 00:17:16.019 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:16.019 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.019 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.019 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.019 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.019 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.019 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.019 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.019 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.019 09:54:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.019 09:54:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.019 09:54:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.019 "name": "raid_bdev1", 00:17:16.019 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:16.019 "strip_size_kb": 64, 00:17:16.019 "state": "online", 00:17:16.019 "raid_level": "raid5f", 00:17:16.019 "superblock": true, 00:17:16.019 "num_base_bdevs": 3, 00:17:16.019 "num_base_bdevs_discovered": 3, 00:17:16.019 "num_base_bdevs_operational": 3, 00:17:16.019 "process": { 00:17:16.019 "type": "rebuild", 00:17:16.019 "target": "spare", 00:17:16.019 "progress": { 00:17:16.019 "blocks": 22528, 00:17:16.019 "percent": 17 00:17:16.019 } 00:17:16.019 }, 00:17:16.019 "base_bdevs_list": [ 00:17:16.019 { 00:17:16.019 "name": "spare", 00:17:16.019 "uuid": "2ff1c499-c783-5c4d-b243-1604ec407d5d", 
00:17:16.019 "is_configured": true, 00:17:16.019 "data_offset": 2048, 00:17:16.019 "data_size": 63488 00:17:16.019 }, 00:17:16.019 { 00:17:16.019 "name": "BaseBdev2", 00:17:16.019 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:16.019 "is_configured": true, 00:17:16.019 "data_offset": 2048, 00:17:16.019 "data_size": 63488 00:17:16.019 }, 00:17:16.019 { 00:17:16.019 "name": "BaseBdev3", 00:17:16.019 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:16.019 "is_configured": true, 00:17:16.019 "data_offset": 2048, 00:17:16.019 "data_size": 63488 00:17:16.019 } 00:17:16.019 ] 00:17:16.019 }' 00:17:16.019 09:54:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.019 09:54:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.019 09:54:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.019 09:54:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.019 09:54:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:17.399 09:54:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:17.399 09:54:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.399 09:54:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.399 09:54:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.399 09:54:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.399 09:54:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.399 09:54:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.399 09:54:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.399 09:54:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.399 09:54:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.399 09:54:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.399 09:54:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.399 "name": "raid_bdev1", 00:17:17.399 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:17.399 "strip_size_kb": 64, 00:17:17.399 "state": "online", 00:17:17.399 "raid_level": "raid5f", 00:17:17.399 "superblock": true, 00:17:17.399 "num_base_bdevs": 3, 00:17:17.399 "num_base_bdevs_discovered": 3, 00:17:17.399 "num_base_bdevs_operational": 3, 00:17:17.399 "process": { 00:17:17.399 "type": "rebuild", 00:17:17.399 "target": "spare", 00:17:17.399 "progress": { 00:17:17.399 "blocks": 45056, 00:17:17.399 "percent": 35 00:17:17.399 } 00:17:17.399 }, 00:17:17.399 "base_bdevs_list": [ 00:17:17.399 { 00:17:17.399 "name": "spare", 00:17:17.399 "uuid": "2ff1c499-c783-5c4d-b243-1604ec407d5d", 00:17:17.399 "is_configured": true, 00:17:17.399 "data_offset": 2048, 00:17:17.399 "data_size": 63488 00:17:17.399 }, 00:17:17.399 { 00:17:17.399 "name": "BaseBdev2", 00:17:17.399 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:17.399 "is_configured": true, 00:17:17.399 "data_offset": 2048, 00:17:17.399 "data_size": 63488 00:17:17.399 }, 00:17:17.399 { 00:17:17.399 "name": "BaseBdev3", 00:17:17.399 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:17.399 "is_configured": true, 00:17:17.399 "data_offset": 2048, 00:17:17.399 "data_size": 63488 00:17:17.399 } 00:17:17.399 ] 00:17:17.399 }' 00:17:17.399 09:54:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.399 09:54:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.399 09:54:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.399 09:54:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.399 09:54:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:18.336 09:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:18.336 09:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.336 09:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.336 09:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.336 09:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.336 09:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.336 09:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.336 09:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.336 09:54:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.336 09:54:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.336 09:54:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.336 09:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.336 "name": "raid_bdev1", 00:17:18.336 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:18.336 "strip_size_kb": 64, 00:17:18.336 "state": "online", 00:17:18.336 "raid_level": "raid5f", 00:17:18.336 "superblock": true, 
00:17:18.336 "num_base_bdevs": 3, 00:17:18.336 "num_base_bdevs_discovered": 3, 00:17:18.336 "num_base_bdevs_operational": 3, 00:17:18.336 "process": { 00:17:18.336 "type": "rebuild", 00:17:18.336 "target": "spare", 00:17:18.336 "progress": { 00:17:18.336 "blocks": 69632, 00:17:18.336 "percent": 54 00:17:18.336 } 00:17:18.336 }, 00:17:18.336 "base_bdevs_list": [ 00:17:18.336 { 00:17:18.336 "name": "spare", 00:17:18.336 "uuid": "2ff1c499-c783-5c4d-b243-1604ec407d5d", 00:17:18.336 "is_configured": true, 00:17:18.336 "data_offset": 2048, 00:17:18.336 "data_size": 63488 00:17:18.336 }, 00:17:18.336 { 00:17:18.336 "name": "BaseBdev2", 00:17:18.336 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:18.336 "is_configured": true, 00:17:18.336 "data_offset": 2048, 00:17:18.336 "data_size": 63488 00:17:18.336 }, 00:17:18.336 { 00:17:18.336 "name": "BaseBdev3", 00:17:18.336 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:18.336 "is_configured": true, 00:17:18.336 "data_offset": 2048, 00:17:18.336 "data_size": 63488 00:17:18.336 } 00:17:18.336 ] 00:17:18.336 }' 00:17:18.336 09:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.336 09:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.336 09:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.336 09:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.336 09:54:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:19.715 09:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.715 09:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.715 09:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:17:19.715 09:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.715 09:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.715 09:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.715 09:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.715 09:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.715 09:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.715 09:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.715 09:54:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.715 09:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.715 "name": "raid_bdev1", 00:17:19.715 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:19.715 "strip_size_kb": 64, 00:17:19.715 "state": "online", 00:17:19.715 "raid_level": "raid5f", 00:17:19.715 "superblock": true, 00:17:19.715 "num_base_bdevs": 3, 00:17:19.715 "num_base_bdevs_discovered": 3, 00:17:19.715 "num_base_bdevs_operational": 3, 00:17:19.715 "process": { 00:17:19.715 "type": "rebuild", 00:17:19.715 "target": "spare", 00:17:19.715 "progress": { 00:17:19.715 "blocks": 92160, 00:17:19.715 "percent": 72 00:17:19.715 } 00:17:19.715 }, 00:17:19.715 "base_bdevs_list": [ 00:17:19.715 { 00:17:19.715 "name": "spare", 00:17:19.715 "uuid": "2ff1c499-c783-5c4d-b243-1604ec407d5d", 00:17:19.715 "is_configured": true, 00:17:19.715 "data_offset": 2048, 00:17:19.715 "data_size": 63488 00:17:19.715 }, 00:17:19.715 { 00:17:19.715 "name": "BaseBdev2", 00:17:19.715 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:19.715 "is_configured": true, 00:17:19.715 "data_offset": 2048, 00:17:19.715 
"data_size": 63488 00:17:19.715 }, 00:17:19.715 { 00:17:19.715 "name": "BaseBdev3", 00:17:19.715 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:19.715 "is_configured": true, 00:17:19.715 "data_offset": 2048, 00:17:19.715 "data_size": 63488 00:17:19.715 } 00:17:19.715 ] 00:17:19.715 }' 00:17:19.715 09:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.715 09:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:19.715 09:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.715 09:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.715 09:54:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:20.676 09:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:20.676 09:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:20.676 09:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.676 09:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:20.676 09:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:20.676 09:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.676 09:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.676 09:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.676 09:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.676 09:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.676 
09:54:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.676 09:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.676 "name": "raid_bdev1", 00:17:20.676 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:20.676 "strip_size_kb": 64, 00:17:20.676 "state": "online", 00:17:20.676 "raid_level": "raid5f", 00:17:20.676 "superblock": true, 00:17:20.676 "num_base_bdevs": 3, 00:17:20.676 "num_base_bdevs_discovered": 3, 00:17:20.676 "num_base_bdevs_operational": 3, 00:17:20.676 "process": { 00:17:20.676 "type": "rebuild", 00:17:20.676 "target": "spare", 00:17:20.676 "progress": { 00:17:20.676 "blocks": 114688, 00:17:20.676 "percent": 90 00:17:20.676 } 00:17:20.676 }, 00:17:20.676 "base_bdevs_list": [ 00:17:20.676 { 00:17:20.676 "name": "spare", 00:17:20.676 "uuid": "2ff1c499-c783-5c4d-b243-1604ec407d5d", 00:17:20.676 "is_configured": true, 00:17:20.676 "data_offset": 2048, 00:17:20.676 "data_size": 63488 00:17:20.676 }, 00:17:20.676 { 00:17:20.676 "name": "BaseBdev2", 00:17:20.676 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:20.676 "is_configured": true, 00:17:20.676 "data_offset": 2048, 00:17:20.676 "data_size": 63488 00:17:20.676 }, 00:17:20.676 { 00:17:20.676 "name": "BaseBdev3", 00:17:20.676 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:20.676 "is_configured": true, 00:17:20.676 "data_offset": 2048, 00:17:20.676 "data_size": 63488 00:17:20.676 } 00:17:20.676 ] 00:17:20.676 }' 00:17:20.676 09:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.676 09:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:20.676 09:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.676 09:54:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:20.676 09:54:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:21.243 [2024-11-27 09:54:22.135450] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:21.243 [2024-11-27 09:54:22.135738] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:21.243 [2024-11-27 09:54:22.135966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.811 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:21.811 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.812 "name": "raid_bdev1", 00:17:21.812 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:21.812 "strip_size_kb": 64, 00:17:21.812 "state": "online", 00:17:21.812 "raid_level": "raid5f", 
00:17:21.812 "superblock": true, 00:17:21.812 "num_base_bdevs": 3, 00:17:21.812 "num_base_bdevs_discovered": 3, 00:17:21.812 "num_base_bdevs_operational": 3, 00:17:21.812 "base_bdevs_list": [ 00:17:21.812 { 00:17:21.812 "name": "spare", 00:17:21.812 "uuid": "2ff1c499-c783-5c4d-b243-1604ec407d5d", 00:17:21.812 "is_configured": true, 00:17:21.812 "data_offset": 2048, 00:17:21.812 "data_size": 63488 00:17:21.812 }, 00:17:21.812 { 00:17:21.812 "name": "BaseBdev2", 00:17:21.812 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:21.812 "is_configured": true, 00:17:21.812 "data_offset": 2048, 00:17:21.812 "data_size": 63488 00:17:21.812 }, 00:17:21.812 { 00:17:21.812 "name": "BaseBdev3", 00:17:21.812 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:21.812 "is_configured": true, 00:17:21.812 "data_offset": 2048, 00:17:21.812 "data_size": 63488 00:17:21.812 } 00:17:21.812 ] 00:17:21.812 }' 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.812 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.812 "name": "raid_bdev1", 00:17:21.812 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:21.812 "strip_size_kb": 64, 00:17:21.812 "state": "online", 00:17:21.812 "raid_level": "raid5f", 00:17:21.812 "superblock": true, 00:17:21.812 "num_base_bdevs": 3, 00:17:21.812 "num_base_bdevs_discovered": 3, 00:17:21.812 "num_base_bdevs_operational": 3, 00:17:21.812 "base_bdevs_list": [ 00:17:21.812 { 00:17:21.812 "name": "spare", 00:17:21.812 "uuid": "2ff1c499-c783-5c4d-b243-1604ec407d5d", 00:17:21.812 "is_configured": true, 00:17:21.812 "data_offset": 2048, 00:17:21.812 "data_size": 63488 00:17:21.812 }, 00:17:21.812 { 00:17:21.812 "name": "BaseBdev2", 00:17:21.812 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:21.812 "is_configured": true, 00:17:21.812 "data_offset": 2048, 00:17:21.812 "data_size": 63488 00:17:21.812 }, 00:17:21.812 { 00:17:21.812 "name": "BaseBdev3", 00:17:21.812 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:21.812 "is_configured": true, 00:17:21.812 "data_offset": 2048, 00:17:21.812 "data_size": 63488 00:17:21.812 } 00:17:21.812 ] 00:17:21.812 }' 00:17:22.071 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:22.071 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:17:22.071 09:54:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:22.071 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:22.071 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:22.071 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.071 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.071 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.071 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.071 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:22.071 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.071 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.071 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.071 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.071 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.071 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.071 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.071 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.071 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.071 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.072 "name": "raid_bdev1", 00:17:22.072 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:22.072 "strip_size_kb": 64, 00:17:22.072 "state": "online", 00:17:22.072 "raid_level": "raid5f", 00:17:22.072 "superblock": true, 00:17:22.072 "num_base_bdevs": 3, 00:17:22.072 "num_base_bdevs_discovered": 3, 00:17:22.072 "num_base_bdevs_operational": 3, 00:17:22.072 "base_bdevs_list": [ 00:17:22.072 { 00:17:22.072 "name": "spare", 00:17:22.072 "uuid": "2ff1c499-c783-5c4d-b243-1604ec407d5d", 00:17:22.072 "is_configured": true, 00:17:22.072 "data_offset": 2048, 00:17:22.072 "data_size": 63488 00:17:22.072 }, 00:17:22.072 { 00:17:22.072 "name": "BaseBdev2", 00:17:22.072 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:22.072 "is_configured": true, 00:17:22.072 "data_offset": 2048, 00:17:22.072 "data_size": 63488 00:17:22.072 }, 00:17:22.072 { 00:17:22.072 "name": "BaseBdev3", 00:17:22.072 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:22.072 "is_configured": true, 00:17:22.072 "data_offset": 2048, 00:17:22.072 "data_size": 63488 00:17:22.072 } 00:17:22.072 ] 00:17:22.072 }' 00:17:22.072 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.072 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.331 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:22.331 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.331 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.331 [2024-11-27 09:54:23.431174] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:22.331 [2024-11-27 09:54:23.431221] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:22.331 [2024-11-27 09:54:23.431360] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.331 [2024-11-27 09:54:23.431462] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.331 [2024-11-27 09:54:23.431482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:22.331 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.331 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.331 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.331 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.331 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:22.331 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.590 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:22.590 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:22.590 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:22.590 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:22.590 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:22.590 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:22.590 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:22.590 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:22.590 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:17:22.590 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:22.590 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:22.590 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:22.590 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:22.590 /dev/nbd0 00:17:22.849 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:22.849 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:22.849 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:22.849 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:22.849 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:22.849 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:22.849 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:22.849 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:22.849 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:22.849 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:22.849 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:22.849 1+0 records in 00:17:22.849 1+0 records out 00:17:22.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337223 s, 12.1 MB/s 00:17:22.849 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.849 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:22.849 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.849 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:22.849 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:22.849 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:22.849 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:22.849 09:54:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:23.109 /dev/nbd1 00:17:23.109 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:23.109 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:23.109 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:23.109 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:23.109 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:23.109 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:23.109 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:23.109 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:23.109 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:23.109 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:23.109 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.109 1+0 records in 00:17:23.109 1+0 records out 00:17:23.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000696255 s, 5.9 MB/s 00:17:23.109 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.109 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:23.109 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.109 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:23.109 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:23.109 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:23.109 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:23.109 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:23.368 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:23.368 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:23.368 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:23.368 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:23.368 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:23.368 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.368 
09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:23.368 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:23.368 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:23.368 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:23.368 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.368 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.368 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:23.368 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:23.368 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.369 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:23.369 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:23.628 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:23.628 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:23.628 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:23.628 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.628 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.628 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:23.628 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:23.628 
09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.628 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:23.628 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:23.628 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.628 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.629 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.629 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:23.629 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.629 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.629 [2024-11-27 09:54:24.747389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:23.629 [2024-11-27 09:54:24.747501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.629 [2024-11-27 09:54:24.747537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:23.629 [2024-11-27 09:54:24.747554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.629 [2024-11-27 09:54:24.750783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.629 [2024-11-27 09:54:24.750925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:23.629 [2024-11-27 09:54:24.751129] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:23.629 [2024-11-27 09:54:24.751225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:23.629 [2024-11-27 09:54:24.751439] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:23.629 [2024-11-27 09:54:24.751644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:23.629 spare 00:17:23.629 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.629 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:23.629 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.629 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.888 [2024-11-27 09:54:24.851609] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:23.888 [2024-11-27 09:54:24.851712] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:23.888 [2024-11-27 09:54:24.852197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:17:23.888 [2024-11-27 09:54:24.857827] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:23.888 [2024-11-27 09:54:24.857946] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:23.888 [2024-11-27 09:54:24.858299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.888 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.888 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:23.888 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.888 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.888 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid5f 00:17:23.888 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.888 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:23.888 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.888 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.888 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.888 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.888 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.888 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.888 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.888 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.888 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.888 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.888 "name": "raid_bdev1", 00:17:23.888 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:23.888 "strip_size_kb": 64, 00:17:23.888 "state": "online", 00:17:23.888 "raid_level": "raid5f", 00:17:23.888 "superblock": true, 00:17:23.888 "num_base_bdevs": 3, 00:17:23.888 "num_base_bdevs_discovered": 3, 00:17:23.888 "num_base_bdevs_operational": 3, 00:17:23.888 "base_bdevs_list": [ 00:17:23.888 { 00:17:23.888 "name": "spare", 00:17:23.888 "uuid": "2ff1c499-c783-5c4d-b243-1604ec407d5d", 00:17:23.888 "is_configured": true, 00:17:23.888 "data_offset": 2048, 00:17:23.888 "data_size": 63488 00:17:23.888 }, 00:17:23.888 { 00:17:23.888 "name": "BaseBdev2", 
00:17:23.888 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:23.888 "is_configured": true, 00:17:23.888 "data_offset": 2048, 00:17:23.888 "data_size": 63488 00:17:23.888 }, 00:17:23.888 { 00:17:23.888 "name": "BaseBdev3", 00:17:23.888 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:23.888 "is_configured": true, 00:17:23.888 "data_offset": 2048, 00:17:23.888 "data_size": 63488 00:17:23.888 } 00:17:23.888 ] 00:17:23.888 }' 00:17:23.888 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.888 09:54:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.457 "name": "raid_bdev1", 00:17:24.457 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:24.457 "strip_size_kb": 64, 
00:17:24.457 "state": "online", 00:17:24.457 "raid_level": "raid5f", 00:17:24.457 "superblock": true, 00:17:24.457 "num_base_bdevs": 3, 00:17:24.457 "num_base_bdevs_discovered": 3, 00:17:24.457 "num_base_bdevs_operational": 3, 00:17:24.457 "base_bdevs_list": [ 00:17:24.457 { 00:17:24.457 "name": "spare", 00:17:24.457 "uuid": "2ff1c499-c783-5c4d-b243-1604ec407d5d", 00:17:24.457 "is_configured": true, 00:17:24.457 "data_offset": 2048, 00:17:24.457 "data_size": 63488 00:17:24.457 }, 00:17:24.457 { 00:17:24.457 "name": "BaseBdev2", 00:17:24.457 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:24.457 "is_configured": true, 00:17:24.457 "data_offset": 2048, 00:17:24.457 "data_size": 63488 00:17:24.457 }, 00:17:24.457 { 00:17:24.457 "name": "BaseBdev3", 00:17:24.457 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:24.457 "is_configured": true, 00:17:24.457 "data_offset": 2048, 00:17:24.457 "data_size": 63488 00:17:24.457 } 00:17:24.457 ] 00:17:24.457 }' 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.457 09:54:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.457 [2024-11-27 09:54:25.481141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.457 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.457 "name": "raid_bdev1", 00:17:24.457 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:24.457 "strip_size_kb": 64, 00:17:24.457 "state": "online", 00:17:24.457 "raid_level": "raid5f", 00:17:24.457 "superblock": true, 00:17:24.457 "num_base_bdevs": 3, 00:17:24.457 "num_base_bdevs_discovered": 2, 00:17:24.457 "num_base_bdevs_operational": 2, 00:17:24.457 "base_bdevs_list": [ 00:17:24.457 { 00:17:24.457 "name": null, 00:17:24.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.457 "is_configured": false, 00:17:24.457 "data_offset": 0, 00:17:24.457 "data_size": 63488 00:17:24.457 }, 00:17:24.457 { 00:17:24.457 "name": "BaseBdev2", 00:17:24.457 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:24.457 "is_configured": true, 00:17:24.457 "data_offset": 2048, 00:17:24.457 "data_size": 63488 00:17:24.458 }, 00:17:24.458 { 00:17:24.458 "name": "BaseBdev3", 00:17:24.458 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:24.458 "is_configured": true, 00:17:24.458 "data_offset": 2048, 00:17:24.458 "data_size": 63488 00:17:24.458 } 00:17:24.458 ] 00:17:24.458 }' 00:17:24.458 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.458 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.026 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:25.026 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.026 09:54:25 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.026 [2024-11-27 09:54:25.896463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:25.026 [2024-11-27 09:54:25.896852] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:25.026 [2024-11-27 09:54:25.896947] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:25.026 [2024-11-27 09:54:25.897045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:25.026 [2024-11-27 09:54:25.914181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:17:25.026 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.026 09:54:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:25.026 [2024-11-27 09:54:25.922611] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:25.962 09:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:25.962 09:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.962 09:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:25.962 09:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:25.962 09:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.962 09:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.962 09:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.962 09:54:26 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.962 09:54:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.962 09:54:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.962 09:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.962 "name": "raid_bdev1", 00:17:25.962 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:25.962 "strip_size_kb": 64, 00:17:25.962 "state": "online", 00:17:25.962 "raid_level": "raid5f", 00:17:25.962 "superblock": true, 00:17:25.962 "num_base_bdevs": 3, 00:17:25.962 "num_base_bdevs_discovered": 3, 00:17:25.962 "num_base_bdevs_operational": 3, 00:17:25.962 "process": { 00:17:25.962 "type": "rebuild", 00:17:25.962 "target": "spare", 00:17:25.962 "progress": { 00:17:25.962 "blocks": 20480, 00:17:25.962 "percent": 16 00:17:25.962 } 00:17:25.962 }, 00:17:25.962 "base_bdevs_list": [ 00:17:25.962 { 00:17:25.962 "name": "spare", 00:17:25.962 "uuid": "2ff1c499-c783-5c4d-b243-1604ec407d5d", 00:17:25.962 "is_configured": true, 00:17:25.962 "data_offset": 2048, 00:17:25.962 "data_size": 63488 00:17:25.962 }, 00:17:25.962 { 00:17:25.962 "name": "BaseBdev2", 00:17:25.962 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:25.962 "is_configured": true, 00:17:25.962 "data_offset": 2048, 00:17:25.962 "data_size": 63488 00:17:25.962 }, 00:17:25.962 { 00:17:25.962 "name": "BaseBdev3", 00:17:25.962 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:25.962 "is_configured": true, 00:17:25.962 "data_offset": 2048, 00:17:25.962 "data_size": 63488 00:17:25.962 } 00:17:25.962 ] 00:17:25.962 }' 00:17:25.962 09:54:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.962 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:25.962 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:25.962 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:25.962 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:25.962 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.962 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.962 [2024-11-27 09:54:27.078923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:26.221 [2024-11-27 09:54:27.139075] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:26.221 [2024-11-27 09:54:27.139368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.221 [2024-11-27 09:54:27.139394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:26.221 [2024-11-27 09:54:27.139408] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:26.221 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.221 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:26.221 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.221 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.221 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:26.221 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:26.221 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:26.221 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:26.221 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.221 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.221 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.221 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.221 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.221 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.221 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.221 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.221 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.221 "name": "raid_bdev1", 00:17:26.221 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:26.221 "strip_size_kb": 64, 00:17:26.221 "state": "online", 00:17:26.221 "raid_level": "raid5f", 00:17:26.221 "superblock": true, 00:17:26.221 "num_base_bdevs": 3, 00:17:26.221 "num_base_bdevs_discovered": 2, 00:17:26.221 "num_base_bdevs_operational": 2, 00:17:26.221 "base_bdevs_list": [ 00:17:26.221 { 00:17:26.221 "name": null, 00:17:26.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.221 "is_configured": false, 00:17:26.221 "data_offset": 0, 00:17:26.221 "data_size": 63488 00:17:26.221 }, 00:17:26.221 { 00:17:26.221 "name": "BaseBdev2", 00:17:26.221 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:26.221 "is_configured": true, 00:17:26.221 "data_offset": 2048, 00:17:26.221 "data_size": 63488 00:17:26.221 }, 00:17:26.221 { 00:17:26.221 "name": "BaseBdev3", 00:17:26.221 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:26.221 "is_configured": true, 00:17:26.221 
"data_offset": 2048, 00:17:26.221 "data_size": 63488 00:17:26.221 } 00:17:26.221 ] 00:17:26.221 }' 00:17:26.221 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.221 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.790 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:26.790 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.790 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:26.790 [2024-11-27 09:54:27.619985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:26.790 [2024-11-27 09:54:27.620190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.790 [2024-11-27 09:54:27.620232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:17:26.790 [2024-11-27 09:54:27.620255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.790 [2024-11-27 09:54:27.620980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.790 [2024-11-27 09:54:27.621059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:26.790 [2024-11-27 09:54:27.621223] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:26.790 [2024-11-27 09:54:27.621247] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:26.790 [2024-11-27 09:54:27.621264] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:26.790 [2024-11-27 09:54:27.621302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:26.790 [2024-11-27 09:54:27.640166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:17:26.790 spare 00:17:26.790 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.790 09:54:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:26.790 [2024-11-27 09:54:27.649278] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:27.727 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.727 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.727 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.727 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.727 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.727 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.727 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.727 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.727 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.727 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.727 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.727 "name": "raid_bdev1", 00:17:27.727 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:27.727 "strip_size_kb": 64, 00:17:27.727 "state": 
"online", 00:17:27.727 "raid_level": "raid5f", 00:17:27.727 "superblock": true, 00:17:27.727 "num_base_bdevs": 3, 00:17:27.727 "num_base_bdevs_discovered": 3, 00:17:27.727 "num_base_bdevs_operational": 3, 00:17:27.727 "process": { 00:17:27.727 "type": "rebuild", 00:17:27.727 "target": "spare", 00:17:27.727 "progress": { 00:17:27.727 "blocks": 18432, 00:17:27.727 "percent": 14 00:17:27.727 } 00:17:27.727 }, 00:17:27.727 "base_bdevs_list": [ 00:17:27.727 { 00:17:27.727 "name": "spare", 00:17:27.727 "uuid": "2ff1c499-c783-5c4d-b243-1604ec407d5d", 00:17:27.727 "is_configured": true, 00:17:27.727 "data_offset": 2048, 00:17:27.727 "data_size": 63488 00:17:27.727 }, 00:17:27.727 { 00:17:27.727 "name": "BaseBdev2", 00:17:27.727 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:27.727 "is_configured": true, 00:17:27.727 "data_offset": 2048, 00:17:27.727 "data_size": 63488 00:17:27.727 }, 00:17:27.727 { 00:17:27.727 "name": "BaseBdev3", 00:17:27.727 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:27.727 "is_configured": true, 00:17:27.727 "data_offset": 2048, 00:17:27.727 "data_size": 63488 00:17:27.727 } 00:17:27.727 ] 00:17:27.727 }' 00:17:27.727 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.727 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.727 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.727 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.727 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:27.727 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.727 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.727 [2024-11-27 09:54:28.809436] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.987 [2024-11-27 09:54:28.865651] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:27.987 [2024-11-27 09:54:28.865752] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.987 [2024-11-27 09:54:28.865781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:27.987 [2024-11-27 09:54:28.865793] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:27.987 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.987 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:27.987 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.987 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.987 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:27.987 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:27.987 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:27.987 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.987 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.987 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.987 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.987 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.987 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.987 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.987 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.987 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.987 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.987 "name": "raid_bdev1", 00:17:27.987 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:27.987 "strip_size_kb": 64, 00:17:27.987 "state": "online", 00:17:27.987 "raid_level": "raid5f", 00:17:27.987 "superblock": true, 00:17:27.987 "num_base_bdevs": 3, 00:17:27.987 "num_base_bdevs_discovered": 2, 00:17:27.987 "num_base_bdevs_operational": 2, 00:17:27.987 "base_bdevs_list": [ 00:17:27.987 { 00:17:27.987 "name": null, 00:17:27.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.987 "is_configured": false, 00:17:27.987 "data_offset": 0, 00:17:27.987 "data_size": 63488 00:17:27.987 }, 00:17:27.987 { 00:17:27.987 "name": "BaseBdev2", 00:17:27.987 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:27.987 "is_configured": true, 00:17:27.987 "data_offset": 2048, 00:17:27.987 "data_size": 63488 00:17:27.987 }, 00:17:27.987 { 00:17:27.987 "name": "BaseBdev3", 00:17:27.987 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:27.987 "is_configured": true, 00:17:27.987 "data_offset": 2048, 00:17:27.987 "data_size": 63488 00:17:27.987 } 00:17:27.987 ] 00:17:27.987 }' 00:17:27.987 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.987 09:54:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.247 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:28.247 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:17:28.247 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:28.247 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:28.247 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.247 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.247 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.247 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.247 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.506 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.506 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.506 "name": "raid_bdev1", 00:17:28.506 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:28.506 "strip_size_kb": 64, 00:17:28.506 "state": "online", 00:17:28.506 "raid_level": "raid5f", 00:17:28.506 "superblock": true, 00:17:28.506 "num_base_bdevs": 3, 00:17:28.506 "num_base_bdevs_discovered": 2, 00:17:28.506 "num_base_bdevs_operational": 2, 00:17:28.506 "base_bdevs_list": [ 00:17:28.506 { 00:17:28.506 "name": null, 00:17:28.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.506 "is_configured": false, 00:17:28.506 "data_offset": 0, 00:17:28.506 "data_size": 63488 00:17:28.506 }, 00:17:28.506 { 00:17:28.506 "name": "BaseBdev2", 00:17:28.506 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:28.506 "is_configured": true, 00:17:28.506 "data_offset": 2048, 00:17:28.506 "data_size": 63488 00:17:28.506 }, 00:17:28.506 { 00:17:28.506 "name": "BaseBdev3", 00:17:28.506 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:28.506 "is_configured": true, 
00:17:28.506 "data_offset": 2048, 00:17:28.506 "data_size": 63488 00:17:28.506 } 00:17:28.506 ] 00:17:28.506 }' 00:17:28.506 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.506 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:28.506 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.506 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:28.506 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:28.506 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.506 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.506 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.506 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:28.506 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.506 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.506 [2024-11-27 09:54:29.527336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:28.506 [2024-11-27 09:54:29.527497] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.506 [2024-11-27 09:54:29.527540] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:28.506 [2024-11-27 09:54:29.527554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.506 [2024-11-27 09:54:29.528282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.506 [2024-11-27 
09:54:29.528314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:28.506 [2024-11-27 09:54:29.528450] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:28.506 [2024-11-27 09:54:29.528480] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:28.506 [2024-11-27 09:54:29.528525] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:28.506 [2024-11-27 09:54:29.528542] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:28.506 BaseBdev1 00:17:28.506 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.506 09:54:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:29.444 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:29.444 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:29.444 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:29.444 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:29.445 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:29.445 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:29.445 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:29.445 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:29.445 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:29.445 09:54:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:29.445 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.445 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.445 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.445 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.445 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.704 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:29.704 "name": "raid_bdev1", 00:17:29.704 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:29.704 "strip_size_kb": 64, 00:17:29.704 "state": "online", 00:17:29.704 "raid_level": "raid5f", 00:17:29.704 "superblock": true, 00:17:29.704 "num_base_bdevs": 3, 00:17:29.704 "num_base_bdevs_discovered": 2, 00:17:29.704 "num_base_bdevs_operational": 2, 00:17:29.704 "base_bdevs_list": [ 00:17:29.704 { 00:17:29.704 "name": null, 00:17:29.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.704 "is_configured": false, 00:17:29.704 "data_offset": 0, 00:17:29.704 "data_size": 63488 00:17:29.704 }, 00:17:29.704 { 00:17:29.704 "name": "BaseBdev2", 00:17:29.704 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:29.704 "is_configured": true, 00:17:29.704 "data_offset": 2048, 00:17:29.704 "data_size": 63488 00:17:29.704 }, 00:17:29.704 { 00:17:29.704 "name": "BaseBdev3", 00:17:29.704 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:29.704 "is_configured": true, 00:17:29.704 "data_offset": 2048, 00:17:29.704 "data_size": 63488 00:17:29.704 } 00:17:29.704 ] 00:17:29.704 }' 00:17:29.704 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:29.704 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:29.964 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:29.964 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:29.964 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:29.964 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:29.964 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.964 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.964 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.964 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.964 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:29.964 09:54:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.964 09:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.964 "name": "raid_bdev1", 00:17:29.964 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:29.964 "strip_size_kb": 64, 00:17:29.964 "state": "online", 00:17:29.964 "raid_level": "raid5f", 00:17:29.964 "superblock": true, 00:17:29.964 "num_base_bdevs": 3, 00:17:29.964 "num_base_bdevs_discovered": 2, 00:17:29.964 "num_base_bdevs_operational": 2, 00:17:29.964 "base_bdevs_list": [ 00:17:29.964 { 00:17:29.964 "name": null, 00:17:29.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.964 "is_configured": false, 00:17:29.964 "data_offset": 0, 00:17:29.964 "data_size": 63488 00:17:29.964 }, 00:17:29.964 { 00:17:29.964 "name": "BaseBdev2", 00:17:29.964 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 
00:17:29.964 "is_configured": true, 00:17:29.964 "data_offset": 2048, 00:17:29.964 "data_size": 63488 00:17:29.964 }, 00:17:29.964 { 00:17:29.964 "name": "BaseBdev3", 00:17:29.964 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:29.964 "is_configured": true, 00:17:29.964 "data_offset": 2048, 00:17:29.964 "data_size": 63488 00:17:29.964 } 00:17:29.964 ] 00:17:29.964 }' 00:17:29.964 09:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.964 09:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:29.964 09:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.224 09:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:30.224 09:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:30.224 09:54:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:30.224 09:54:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:30.224 09:54:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:30.224 09:54:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:30.224 09:54:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:30.224 09:54:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:30.224 09:54:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:30.224 09:54:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.224 09:54:31 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.224 [2024-11-27 09:54:31.128888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.224 [2024-11-27 09:54:31.129238] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:30.224 [2024-11-27 09:54:31.129332] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:30.224 request: 00:17:30.224 { 00:17:30.224 "base_bdev": "BaseBdev1", 00:17:30.224 "raid_bdev": "raid_bdev1", 00:17:30.224 "method": "bdev_raid_add_base_bdev", 00:17:30.224 "req_id": 1 00:17:30.224 } 00:17:30.224 Got JSON-RPC error response 00:17:30.224 response: 00:17:30.224 { 00:17:30.224 "code": -22, 00:17:30.224 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:30.224 } 00:17:30.224 09:54:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:30.224 09:54:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:30.224 09:54:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:30.224 09:54:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:30.224 09:54:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:30.224 09:54:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:31.174 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:31.174 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.174 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.174 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:31.174 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:31.174 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:31.174 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.174 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.174 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.174 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.174 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.174 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.174 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.174 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.174 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.174 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.174 "name": "raid_bdev1", 00:17:31.174 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:31.174 "strip_size_kb": 64, 00:17:31.174 "state": "online", 00:17:31.174 "raid_level": "raid5f", 00:17:31.174 "superblock": true, 00:17:31.174 "num_base_bdevs": 3, 00:17:31.174 "num_base_bdevs_discovered": 2, 00:17:31.174 "num_base_bdevs_operational": 2, 00:17:31.174 "base_bdevs_list": [ 00:17:31.174 { 00:17:31.174 "name": null, 00:17:31.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.174 "is_configured": false, 00:17:31.174 "data_offset": 0, 00:17:31.174 "data_size": 63488 00:17:31.174 }, 00:17:31.174 { 00:17:31.174 
"name": "BaseBdev2", 00:17:31.174 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:31.174 "is_configured": true, 00:17:31.174 "data_offset": 2048, 00:17:31.174 "data_size": 63488 00:17:31.174 }, 00:17:31.174 { 00:17:31.174 "name": "BaseBdev3", 00:17:31.175 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:31.175 "is_configured": true, 00:17:31.175 "data_offset": 2048, 00:17:31.175 "data_size": 63488 00:17:31.175 } 00:17:31.175 ] 00:17:31.175 }' 00:17:31.175 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.175 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.742 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:31.742 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.742 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:31.742 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:31.742 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.742 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.742 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.742 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.742 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.742 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.742 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.742 "name": "raid_bdev1", 00:17:31.742 "uuid": "55748f80-d37a-40e3-b671-0133e7e39f4d", 00:17:31.742 
"strip_size_kb": 64, 00:17:31.742 "state": "online", 00:17:31.742 "raid_level": "raid5f", 00:17:31.742 "superblock": true, 00:17:31.742 "num_base_bdevs": 3, 00:17:31.742 "num_base_bdevs_discovered": 2, 00:17:31.742 "num_base_bdevs_operational": 2, 00:17:31.742 "base_bdevs_list": [ 00:17:31.742 { 00:17:31.742 "name": null, 00:17:31.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.743 "is_configured": false, 00:17:31.743 "data_offset": 0, 00:17:31.743 "data_size": 63488 00:17:31.743 }, 00:17:31.743 { 00:17:31.743 "name": "BaseBdev2", 00:17:31.743 "uuid": "9f2f6336-8738-5706-829a-78db75d0e6e4", 00:17:31.743 "is_configured": true, 00:17:31.743 "data_offset": 2048, 00:17:31.743 "data_size": 63488 00:17:31.743 }, 00:17:31.743 { 00:17:31.743 "name": "BaseBdev3", 00:17:31.743 "uuid": "a6f31190-dc5c-507d-9388-164564cc2427", 00:17:31.743 "is_configured": true, 00:17:31.743 "data_offset": 2048, 00:17:31.743 "data_size": 63488 00:17:31.743 } 00:17:31.743 ] 00:17:31.743 }' 00:17:31.743 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.743 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:31.743 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.743 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:31.743 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82331 00:17:31.743 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82331 ']' 00:17:31.743 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82331 00:17:31.743 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:31.743 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.743 09:54:32 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82331 00:17:31.743 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.743 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.743 killing process with pid 82331 00:17:31.743 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82331' 00:17:31.743 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82331 00:17:31.743 Received shutdown signal, test time was about 60.000000 seconds 00:17:31.743 00:17:31.743 Latency(us) 00:17:31.743 [2024-11-27T09:54:32.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.743 [2024-11-27T09:54:32.876Z] =================================================================================================================== 00:17:31.743 [2024-11-27T09:54:32.876Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:31.743 [2024-11-27 09:54:32.747144] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:31.743 09:54:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82331 00:17:31.743 [2024-11-27 09:54:32.747392] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.743 [2024-11-27 09:54:32.747522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.743 [2024-11-27 09:54:32.747591] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:32.311 [2024-11-27 09:54:33.181216] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:33.693 09:54:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:33.693 00:17:33.693 real 0m23.592s 00:17:33.693 user 0m29.757s 
00:17:33.693 sys 0m3.144s 00:17:33.693 ************************************ 00:17:33.693 END TEST raid5f_rebuild_test_sb 00:17:33.693 ************************************ 00:17:33.693 09:54:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:33.693 09:54:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.693 09:54:34 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:17:33.693 09:54:34 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:17:33.693 09:54:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:33.693 09:54:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:33.693 09:54:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:33.693 ************************************ 00:17:33.693 START TEST raid5f_state_function_test 00:17:33.693 ************************************ 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:33.693 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:33.694 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:33.694 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:33.694 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:17:33.694 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:33.694 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:17:33.694 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:17:33.694 Process raid pid: 83086 00:17:33.694 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83086 00:17:33.694 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:33.694 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83086' 00:17:33.694 09:54:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83086 00:17:33.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.694 09:54:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83086 ']' 00:17:33.694 09:54:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.694 09:54:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:33.694 09:54:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.694 09:54:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:33.694 09:54:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.694 [2024-11-27 09:54:34.598824] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:17:33.694 [2024-11-27 09:54:34.598965] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.694 [2024-11-27 09:54:34.781636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.953 [2024-11-27 09:54:34.928369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.212 [2024-11-27 09:54:35.167912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.212 [2024-11-27 09:54:35.168161] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.472 [2024-11-27 09:54:35.468684] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:34.472 [2024-11-27 09:54:35.468889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:34.472 [2024-11-27 09:54:35.468949] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:34.472 [2024-11-27 09:54:35.468981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:34.472 [2024-11-27 09:54:35.469032] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:17:34.472 [2024-11-27 09:54:35.469105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:34.472 [2024-11-27 09:54:35.469116] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:34.472 [2024-11-27 09:54:35.469129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.472 09:54:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.472 "name": "Existed_Raid", 00:17:34.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.472 "strip_size_kb": 64, 00:17:34.472 "state": "configuring", 00:17:34.472 "raid_level": "raid5f", 00:17:34.472 "superblock": false, 00:17:34.472 "num_base_bdevs": 4, 00:17:34.472 "num_base_bdevs_discovered": 0, 00:17:34.472 "num_base_bdevs_operational": 4, 00:17:34.472 "base_bdevs_list": [ 00:17:34.472 { 00:17:34.472 "name": "BaseBdev1", 00:17:34.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.472 "is_configured": false, 00:17:34.472 "data_offset": 0, 00:17:34.472 "data_size": 0 00:17:34.472 }, 00:17:34.472 { 00:17:34.472 "name": "BaseBdev2", 00:17:34.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.472 "is_configured": false, 00:17:34.472 "data_offset": 0, 00:17:34.472 "data_size": 0 00:17:34.472 }, 00:17:34.472 { 00:17:34.472 "name": "BaseBdev3", 00:17:34.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.472 "is_configured": false, 00:17:34.472 "data_offset": 0, 00:17:34.472 "data_size": 0 00:17:34.472 }, 00:17:34.472 { 00:17:34.472 "name": "BaseBdev4", 00:17:34.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.472 "is_configured": false, 00:17:34.472 "data_offset": 0, 00:17:34.472 "data_size": 0 00:17:34.472 } 00:17:34.472 ] 00:17:34.472 }' 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.472 09:54:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.041 09:54:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:35.041 09:54:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.041 09:54:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.041 [2024-11-27 09:54:35.951783] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:35.041 [2024-11-27 09:54:35.951960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:35.042 09:54:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.042 09:54:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:35.042 09:54:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.042 09:54:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.042 [2024-11-27 09:54:35.963754] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:35.042 [2024-11-27 09:54:35.963905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:35.042 [2024-11-27 09:54:35.963941] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.042 [2024-11-27 09:54:35.963971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.042 [2024-11-27 09:54:35.964005] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:35.042 [2024-11-27 09:54:35.964065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:35.042 [2024-11-27 09:54:35.964101] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:17:35.042 [2024-11-27 09:54:35.964131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:35.042 09:54:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.042 09:54:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:35.042 09:54:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.042 09:54:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.042 [2024-11-27 09:54:36.019285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.042 BaseBdev1 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.042 
09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.042 [ 00:17:35.042 { 00:17:35.042 "name": "BaseBdev1", 00:17:35.042 "aliases": [ 00:17:35.042 "ffacf9f9-adc6-4794-b282-ddea2f0f26ad" 00:17:35.042 ], 00:17:35.042 "product_name": "Malloc disk", 00:17:35.042 "block_size": 512, 00:17:35.042 "num_blocks": 65536, 00:17:35.042 "uuid": "ffacf9f9-adc6-4794-b282-ddea2f0f26ad", 00:17:35.042 "assigned_rate_limits": { 00:17:35.042 "rw_ios_per_sec": 0, 00:17:35.042 "rw_mbytes_per_sec": 0, 00:17:35.042 "r_mbytes_per_sec": 0, 00:17:35.042 "w_mbytes_per_sec": 0 00:17:35.042 }, 00:17:35.042 "claimed": true, 00:17:35.042 "claim_type": "exclusive_write", 00:17:35.042 "zoned": false, 00:17:35.042 "supported_io_types": { 00:17:35.042 "read": true, 00:17:35.042 "write": true, 00:17:35.042 "unmap": true, 00:17:35.042 "flush": true, 00:17:35.042 "reset": true, 00:17:35.042 "nvme_admin": false, 00:17:35.042 "nvme_io": false, 00:17:35.042 "nvme_io_md": false, 00:17:35.042 "write_zeroes": true, 00:17:35.042 "zcopy": true, 00:17:35.042 "get_zone_info": false, 00:17:35.042 "zone_management": false, 00:17:35.042 "zone_append": false, 00:17:35.042 "compare": false, 00:17:35.042 "compare_and_write": false, 00:17:35.042 "abort": true, 00:17:35.042 "seek_hole": false, 00:17:35.042 "seek_data": false, 00:17:35.042 "copy": true, 00:17:35.042 "nvme_iov_md": false 00:17:35.042 }, 00:17:35.042 "memory_domains": [ 00:17:35.042 { 00:17:35.042 "dma_device_id": "system", 00:17:35.042 "dma_device_type": 1 00:17:35.042 }, 00:17:35.042 { 00:17:35.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.042 "dma_device_type": 2 00:17:35.042 } 00:17:35.042 ], 00:17:35.042 "driver_specific": {} 00:17:35.042 } 
00:17:35.042 ] 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.042 "name": "Existed_Raid", 00:17:35.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.042 "strip_size_kb": 64, 00:17:35.042 "state": "configuring", 00:17:35.042 "raid_level": "raid5f", 00:17:35.042 "superblock": false, 00:17:35.042 "num_base_bdevs": 4, 00:17:35.042 "num_base_bdevs_discovered": 1, 00:17:35.042 "num_base_bdevs_operational": 4, 00:17:35.042 "base_bdevs_list": [ 00:17:35.042 { 00:17:35.042 "name": "BaseBdev1", 00:17:35.042 "uuid": "ffacf9f9-adc6-4794-b282-ddea2f0f26ad", 00:17:35.042 "is_configured": true, 00:17:35.042 "data_offset": 0, 00:17:35.042 "data_size": 65536 00:17:35.042 }, 00:17:35.042 { 00:17:35.042 "name": "BaseBdev2", 00:17:35.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.042 "is_configured": false, 00:17:35.042 "data_offset": 0, 00:17:35.042 "data_size": 0 00:17:35.042 }, 00:17:35.042 { 00:17:35.042 "name": "BaseBdev3", 00:17:35.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.042 "is_configured": false, 00:17:35.042 "data_offset": 0, 00:17:35.042 "data_size": 0 00:17:35.042 }, 00:17:35.042 { 00:17:35.042 "name": "BaseBdev4", 00:17:35.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.042 "is_configured": false, 00:17:35.042 "data_offset": 0, 00:17:35.042 "data_size": 0 00:17:35.042 } 00:17:35.042 ] 00:17:35.042 }' 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.042 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.612 
[2024-11-27 09:54:36.474611] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:35.612 [2024-11-27 09:54:36.474705] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.612 [2024-11-27 09:54:36.486682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.612 [2024-11-27 09:54:36.489023] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.612 [2024-11-27 09:54:36.489086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.612 [2024-11-27 09:54:36.489099] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:35.612 [2024-11-27 09:54:36.489113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:35.612 [2024-11-27 09:54:36.489122] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:35.612 [2024-11-27 09:54:36.489134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.612 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.612 "name": "Existed_Raid", 00:17:35.612 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:35.612 "strip_size_kb": 64, 00:17:35.612 "state": "configuring", 00:17:35.612 "raid_level": "raid5f", 00:17:35.612 "superblock": false, 00:17:35.612 "num_base_bdevs": 4, 00:17:35.612 "num_base_bdevs_discovered": 1, 00:17:35.612 "num_base_bdevs_operational": 4, 00:17:35.612 "base_bdevs_list": [ 00:17:35.612 { 00:17:35.612 "name": "BaseBdev1", 00:17:35.612 "uuid": "ffacf9f9-adc6-4794-b282-ddea2f0f26ad", 00:17:35.612 "is_configured": true, 00:17:35.612 "data_offset": 0, 00:17:35.612 "data_size": 65536 00:17:35.612 }, 00:17:35.612 { 00:17:35.612 "name": "BaseBdev2", 00:17:35.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.612 "is_configured": false, 00:17:35.612 "data_offset": 0, 00:17:35.612 "data_size": 0 00:17:35.613 }, 00:17:35.613 { 00:17:35.613 "name": "BaseBdev3", 00:17:35.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.613 "is_configured": false, 00:17:35.613 "data_offset": 0, 00:17:35.613 "data_size": 0 00:17:35.613 }, 00:17:35.613 { 00:17:35.613 "name": "BaseBdev4", 00:17:35.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.613 "is_configured": false, 00:17:35.613 "data_offset": 0, 00:17:35.613 "data_size": 0 00:17:35.613 } 00:17:35.613 ] 00:17:35.613 }' 00:17:35.613 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.613 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.873 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:35.873 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.873 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.873 [2024-11-27 09:54:36.963220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:35.873 BaseBdev2 00:17:35.873 09:54:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.873 09:54:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:35.873 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:35.873 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:35.873 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:35.873 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:35.873 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:35.873 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:35.873 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.873 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.873 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.873 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:35.873 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.873 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.873 [ 00:17:35.873 { 00:17:35.873 "name": "BaseBdev2", 00:17:35.873 "aliases": [ 00:17:35.873 "bd6183e7-8035-4ad3-81cd-592b95edaefb" 00:17:35.873 ], 00:17:35.873 "product_name": "Malloc disk", 00:17:35.873 "block_size": 512, 00:17:35.873 "num_blocks": 65536, 00:17:35.873 "uuid": "bd6183e7-8035-4ad3-81cd-592b95edaefb", 00:17:35.873 "assigned_rate_limits": { 00:17:35.873 "rw_ios_per_sec": 0, 00:17:35.873 "rw_mbytes_per_sec": 0, 00:17:35.873 
"r_mbytes_per_sec": 0, 00:17:35.873 "w_mbytes_per_sec": 0 00:17:35.873 }, 00:17:35.873 "claimed": true, 00:17:35.873 "claim_type": "exclusive_write", 00:17:35.873 "zoned": false, 00:17:35.873 "supported_io_types": { 00:17:35.873 "read": true, 00:17:35.873 "write": true, 00:17:35.873 "unmap": true, 00:17:35.873 "flush": true, 00:17:35.873 "reset": true, 00:17:35.873 "nvme_admin": false, 00:17:35.873 "nvme_io": false, 00:17:35.873 "nvme_io_md": false, 00:17:35.873 "write_zeroes": true, 00:17:35.873 "zcopy": true, 00:17:35.873 "get_zone_info": false, 00:17:35.873 "zone_management": false, 00:17:35.873 "zone_append": false, 00:17:35.873 "compare": false, 00:17:35.873 "compare_and_write": false, 00:17:35.873 "abort": true, 00:17:35.873 "seek_hole": false, 00:17:35.873 "seek_data": false, 00:17:35.873 "copy": true, 00:17:35.873 "nvme_iov_md": false 00:17:35.873 }, 00:17:35.873 "memory_domains": [ 00:17:35.873 { 00:17:35.873 "dma_device_id": "system", 00:17:35.873 "dma_device_type": 1 00:17:35.873 }, 00:17:35.873 { 00:17:35.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.873 "dma_device_type": 2 00:17:35.873 } 00:17:35.873 ], 00:17:35.873 "driver_specific": {} 00:17:35.873 } 00:17:35.873 ] 00:17:35.873 09:54:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.873 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:35.873 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:35.873 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:35.873 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:35.873 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:35.873 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:17:35.873 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:35.873 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:35.873 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:36.132 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.132 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.132 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.132 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.132 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.132 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.132 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.132 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.132 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.132 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.132 "name": "Existed_Raid", 00:17:36.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.132 "strip_size_kb": 64, 00:17:36.132 "state": "configuring", 00:17:36.132 "raid_level": "raid5f", 00:17:36.132 "superblock": false, 00:17:36.132 "num_base_bdevs": 4, 00:17:36.132 "num_base_bdevs_discovered": 2, 00:17:36.132 "num_base_bdevs_operational": 4, 00:17:36.132 "base_bdevs_list": [ 00:17:36.132 { 00:17:36.132 "name": "BaseBdev1", 00:17:36.132 "uuid": 
"ffacf9f9-adc6-4794-b282-ddea2f0f26ad", 00:17:36.132 "is_configured": true, 00:17:36.132 "data_offset": 0, 00:17:36.132 "data_size": 65536 00:17:36.132 }, 00:17:36.132 { 00:17:36.132 "name": "BaseBdev2", 00:17:36.132 "uuid": "bd6183e7-8035-4ad3-81cd-592b95edaefb", 00:17:36.132 "is_configured": true, 00:17:36.132 "data_offset": 0, 00:17:36.132 "data_size": 65536 00:17:36.132 }, 00:17:36.132 { 00:17:36.132 "name": "BaseBdev3", 00:17:36.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.132 "is_configured": false, 00:17:36.132 "data_offset": 0, 00:17:36.132 "data_size": 0 00:17:36.132 }, 00:17:36.132 { 00:17:36.132 "name": "BaseBdev4", 00:17:36.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.132 "is_configured": false, 00:17:36.132 "data_offset": 0, 00:17:36.132 "data_size": 0 00:17:36.132 } 00:17:36.132 ] 00:17:36.132 }' 00:17:36.132 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.132 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.392 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:36.393 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.393 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.393 [2024-11-27 09:54:37.505136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:36.393 BaseBdev3 00:17:36.393 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.393 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:36.393 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:36.393 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:17:36.393 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:36.393 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:36.393 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:36.393 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:36.393 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.393 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.393 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.393 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:36.393 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.393 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.652 [ 00:17:36.652 { 00:17:36.652 "name": "BaseBdev3", 00:17:36.652 "aliases": [ 00:17:36.652 "906aec3f-a687-4c88-b0d6-8607a16821be" 00:17:36.652 ], 00:17:36.652 "product_name": "Malloc disk", 00:17:36.652 "block_size": 512, 00:17:36.652 "num_blocks": 65536, 00:17:36.652 "uuid": "906aec3f-a687-4c88-b0d6-8607a16821be", 00:17:36.652 "assigned_rate_limits": { 00:17:36.652 "rw_ios_per_sec": 0, 00:17:36.652 "rw_mbytes_per_sec": 0, 00:17:36.652 "r_mbytes_per_sec": 0, 00:17:36.652 "w_mbytes_per_sec": 0 00:17:36.652 }, 00:17:36.652 "claimed": true, 00:17:36.652 "claim_type": "exclusive_write", 00:17:36.652 "zoned": false, 00:17:36.652 "supported_io_types": { 00:17:36.652 "read": true, 00:17:36.652 "write": true, 00:17:36.652 "unmap": true, 00:17:36.652 "flush": true, 00:17:36.652 "reset": true, 00:17:36.652 "nvme_admin": false, 
00:17:36.652 "nvme_io": false, 00:17:36.652 "nvme_io_md": false, 00:17:36.652 "write_zeroes": true, 00:17:36.652 "zcopy": true, 00:17:36.652 "get_zone_info": false, 00:17:36.652 "zone_management": false, 00:17:36.652 "zone_append": false, 00:17:36.652 "compare": false, 00:17:36.652 "compare_and_write": false, 00:17:36.652 "abort": true, 00:17:36.652 "seek_hole": false, 00:17:36.652 "seek_data": false, 00:17:36.653 "copy": true, 00:17:36.653 "nvme_iov_md": false 00:17:36.653 }, 00:17:36.653 "memory_domains": [ 00:17:36.653 { 00:17:36.653 "dma_device_id": "system", 00:17:36.653 "dma_device_type": 1 00:17:36.653 }, 00:17:36.653 { 00:17:36.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.653 "dma_device_type": 2 00:17:36.653 } 00:17:36.653 ], 00:17:36.653 "driver_specific": {} 00:17:36.653 } 00:17:36.653 ] 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.653 "name": "Existed_Raid", 00:17:36.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.653 "strip_size_kb": 64, 00:17:36.653 "state": "configuring", 00:17:36.653 "raid_level": "raid5f", 00:17:36.653 "superblock": false, 00:17:36.653 "num_base_bdevs": 4, 00:17:36.653 "num_base_bdevs_discovered": 3, 00:17:36.653 "num_base_bdevs_operational": 4, 00:17:36.653 "base_bdevs_list": [ 00:17:36.653 { 00:17:36.653 "name": "BaseBdev1", 00:17:36.653 "uuid": "ffacf9f9-adc6-4794-b282-ddea2f0f26ad", 00:17:36.653 "is_configured": true, 00:17:36.653 "data_offset": 0, 00:17:36.653 "data_size": 65536 00:17:36.653 }, 00:17:36.653 { 00:17:36.653 "name": "BaseBdev2", 00:17:36.653 "uuid": "bd6183e7-8035-4ad3-81cd-592b95edaefb", 00:17:36.653 "is_configured": true, 00:17:36.653 "data_offset": 0, 00:17:36.653 "data_size": 65536 00:17:36.653 }, 00:17:36.653 { 
00:17:36.653 "name": "BaseBdev3", 00:17:36.653 "uuid": "906aec3f-a687-4c88-b0d6-8607a16821be", 00:17:36.653 "is_configured": true, 00:17:36.653 "data_offset": 0, 00:17:36.653 "data_size": 65536 00:17:36.653 }, 00:17:36.653 { 00:17:36.653 "name": "BaseBdev4", 00:17:36.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.653 "is_configured": false, 00:17:36.653 "data_offset": 0, 00:17:36.653 "data_size": 0 00:17:36.653 } 00:17:36.653 ] 00:17:36.653 }' 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.653 09:54:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.913 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:36.913 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.913 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.174 [2024-11-27 09:54:38.053545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:37.174 [2024-11-27 09:54:38.053648] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:37.174 [2024-11-27 09:54:38.053661] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:37.174 [2024-11-27 09:54:38.053981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:37.174 [2024-11-27 09:54:38.061860] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:37.174 [2024-11-27 09:54:38.061902] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:37.174 [2024-11-27 09:54:38.062353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.174 BaseBdev4 00:17:37.174 09:54:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.174 [ 00:17:37.174 { 00:17:37.174 "name": "BaseBdev4", 00:17:37.174 "aliases": [ 00:17:37.174 "9a12ac5b-bfc3-4799-9a17-4adc77b933dc" 00:17:37.174 ], 00:17:37.174 "product_name": "Malloc disk", 00:17:37.174 "block_size": 512, 00:17:37.174 "num_blocks": 65536, 00:17:37.174 "uuid": "9a12ac5b-bfc3-4799-9a17-4adc77b933dc", 00:17:37.174 "assigned_rate_limits": { 00:17:37.174 "rw_ios_per_sec": 0, 00:17:37.174 
"rw_mbytes_per_sec": 0, 00:17:37.174 "r_mbytes_per_sec": 0, 00:17:37.174 "w_mbytes_per_sec": 0 00:17:37.174 }, 00:17:37.174 "claimed": true, 00:17:37.174 "claim_type": "exclusive_write", 00:17:37.174 "zoned": false, 00:17:37.174 "supported_io_types": { 00:17:37.174 "read": true, 00:17:37.174 "write": true, 00:17:37.174 "unmap": true, 00:17:37.174 "flush": true, 00:17:37.174 "reset": true, 00:17:37.174 "nvme_admin": false, 00:17:37.174 "nvme_io": false, 00:17:37.174 "nvme_io_md": false, 00:17:37.174 "write_zeroes": true, 00:17:37.174 "zcopy": true, 00:17:37.174 "get_zone_info": false, 00:17:37.174 "zone_management": false, 00:17:37.174 "zone_append": false, 00:17:37.174 "compare": false, 00:17:37.174 "compare_and_write": false, 00:17:37.174 "abort": true, 00:17:37.174 "seek_hole": false, 00:17:37.174 "seek_data": false, 00:17:37.174 "copy": true, 00:17:37.174 "nvme_iov_md": false 00:17:37.174 }, 00:17:37.174 "memory_domains": [ 00:17:37.174 { 00:17:37.174 "dma_device_id": "system", 00:17:37.174 "dma_device_type": 1 00:17:37.174 }, 00:17:37.174 { 00:17:37.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.174 "dma_device_type": 2 00:17:37.174 } 00:17:37.174 ], 00:17:37.174 "driver_specific": {} 00:17:37.174 } 00:17:37.174 ] 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:37.174 09:54:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.174 "name": "Existed_Raid", 00:17:37.174 "uuid": "60111d6a-9248-45d0-90a4-8d44cbb69f20", 00:17:37.174 "strip_size_kb": 64, 00:17:37.174 "state": "online", 00:17:37.174 "raid_level": "raid5f", 00:17:37.174 "superblock": false, 00:17:37.174 "num_base_bdevs": 4, 00:17:37.174 "num_base_bdevs_discovered": 4, 00:17:37.174 "num_base_bdevs_operational": 4, 00:17:37.174 "base_bdevs_list": [ 00:17:37.174 { 00:17:37.174 "name": 
"BaseBdev1", 00:17:37.174 "uuid": "ffacf9f9-adc6-4794-b282-ddea2f0f26ad", 00:17:37.174 "is_configured": true, 00:17:37.174 "data_offset": 0, 00:17:37.174 "data_size": 65536 00:17:37.174 }, 00:17:37.174 { 00:17:37.174 "name": "BaseBdev2", 00:17:37.174 "uuid": "bd6183e7-8035-4ad3-81cd-592b95edaefb", 00:17:37.174 "is_configured": true, 00:17:37.174 "data_offset": 0, 00:17:37.174 "data_size": 65536 00:17:37.174 }, 00:17:37.174 { 00:17:37.174 "name": "BaseBdev3", 00:17:37.174 "uuid": "906aec3f-a687-4c88-b0d6-8607a16821be", 00:17:37.174 "is_configured": true, 00:17:37.174 "data_offset": 0, 00:17:37.174 "data_size": 65536 00:17:37.174 }, 00:17:37.174 { 00:17:37.174 "name": "BaseBdev4", 00:17:37.174 "uuid": "9a12ac5b-bfc3-4799-9a17-4adc77b933dc", 00:17:37.174 "is_configured": true, 00:17:37.174 "data_offset": 0, 00:17:37.174 "data_size": 65536 00:17:37.174 } 00:17:37.174 ] 00:17:37.174 }' 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.174 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:37.744 [2024-11-27 09:54:38.583711] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:37.744 "name": "Existed_Raid", 00:17:37.744 "aliases": [ 00:17:37.744 "60111d6a-9248-45d0-90a4-8d44cbb69f20" 00:17:37.744 ], 00:17:37.744 "product_name": "Raid Volume", 00:17:37.744 "block_size": 512, 00:17:37.744 "num_blocks": 196608, 00:17:37.744 "uuid": "60111d6a-9248-45d0-90a4-8d44cbb69f20", 00:17:37.744 "assigned_rate_limits": { 00:17:37.744 "rw_ios_per_sec": 0, 00:17:37.744 "rw_mbytes_per_sec": 0, 00:17:37.744 "r_mbytes_per_sec": 0, 00:17:37.744 "w_mbytes_per_sec": 0 00:17:37.744 }, 00:17:37.744 "claimed": false, 00:17:37.744 "zoned": false, 00:17:37.744 "supported_io_types": { 00:17:37.744 "read": true, 00:17:37.744 "write": true, 00:17:37.744 "unmap": false, 00:17:37.744 "flush": false, 00:17:37.744 "reset": true, 00:17:37.744 "nvme_admin": false, 00:17:37.744 "nvme_io": false, 00:17:37.744 "nvme_io_md": false, 00:17:37.744 "write_zeroes": true, 00:17:37.744 "zcopy": false, 00:17:37.744 "get_zone_info": false, 00:17:37.744 "zone_management": false, 00:17:37.744 "zone_append": false, 00:17:37.744 "compare": false, 00:17:37.744 "compare_and_write": false, 00:17:37.744 "abort": false, 00:17:37.744 "seek_hole": false, 00:17:37.744 "seek_data": false, 00:17:37.744 "copy": false, 00:17:37.744 "nvme_iov_md": false 00:17:37.744 }, 00:17:37.744 "driver_specific": { 00:17:37.744 "raid": { 00:17:37.744 "uuid": "60111d6a-9248-45d0-90a4-8d44cbb69f20", 00:17:37.744 "strip_size_kb": 64, 
00:17:37.744 "state": "online", 00:17:37.744 "raid_level": "raid5f", 00:17:37.744 "superblock": false, 00:17:37.744 "num_base_bdevs": 4, 00:17:37.744 "num_base_bdevs_discovered": 4, 00:17:37.744 "num_base_bdevs_operational": 4, 00:17:37.744 "base_bdevs_list": [ 00:17:37.744 { 00:17:37.744 "name": "BaseBdev1", 00:17:37.744 "uuid": "ffacf9f9-adc6-4794-b282-ddea2f0f26ad", 00:17:37.744 "is_configured": true, 00:17:37.744 "data_offset": 0, 00:17:37.744 "data_size": 65536 00:17:37.744 }, 00:17:37.744 { 00:17:37.744 "name": "BaseBdev2", 00:17:37.744 "uuid": "bd6183e7-8035-4ad3-81cd-592b95edaefb", 00:17:37.744 "is_configured": true, 00:17:37.744 "data_offset": 0, 00:17:37.744 "data_size": 65536 00:17:37.744 }, 00:17:37.744 { 00:17:37.744 "name": "BaseBdev3", 00:17:37.744 "uuid": "906aec3f-a687-4c88-b0d6-8607a16821be", 00:17:37.744 "is_configured": true, 00:17:37.744 "data_offset": 0, 00:17:37.744 "data_size": 65536 00:17:37.744 }, 00:17:37.744 { 00:17:37.744 "name": "BaseBdev4", 00:17:37.744 "uuid": "9a12ac5b-bfc3-4799-9a17-4adc77b933dc", 00:17:37.744 "is_configured": true, 00:17:37.744 "data_offset": 0, 00:17:37.744 "data_size": 65536 00:17:37.744 } 00:17:37.744 ] 00:17:37.744 } 00:17:37.744 } 00:17:37.744 }' 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:37.744 BaseBdev2 00:17:37.744 BaseBdev3 00:17:37.744 BaseBdev4' 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:37.744 09:54:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.744 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.745 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.088 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.088 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:38.088 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:38.088 09:54:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:38.088 09:54:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.088 09:54:38 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:17:38.088 [2024-11-27 09:54:38.903047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.088 09:54:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.088 "name": "Existed_Raid", 00:17:38.088 "uuid": "60111d6a-9248-45d0-90a4-8d44cbb69f20", 00:17:38.088 "strip_size_kb": 64, 00:17:38.088 "state": "online", 00:17:38.088 "raid_level": "raid5f", 00:17:38.088 "superblock": false, 00:17:38.088 "num_base_bdevs": 4, 00:17:38.088 "num_base_bdevs_discovered": 3, 00:17:38.088 "num_base_bdevs_operational": 3, 00:17:38.088 "base_bdevs_list": [ 00:17:38.088 { 00:17:38.088 "name": null, 00:17:38.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.088 "is_configured": false, 00:17:38.088 "data_offset": 0, 00:17:38.088 "data_size": 65536 00:17:38.088 }, 00:17:38.088 { 00:17:38.088 "name": "BaseBdev2", 00:17:38.088 "uuid": "bd6183e7-8035-4ad3-81cd-592b95edaefb", 00:17:38.088 "is_configured": true, 00:17:38.088 "data_offset": 0, 00:17:38.088 "data_size": 65536 00:17:38.088 }, 00:17:38.088 { 00:17:38.088 "name": "BaseBdev3", 00:17:38.088 "uuid": "906aec3f-a687-4c88-b0d6-8607a16821be", 00:17:38.088 "is_configured": true, 00:17:38.088 "data_offset": 0, 00:17:38.088 "data_size": 65536 00:17:38.088 }, 00:17:38.088 { 00:17:38.088 "name": "BaseBdev4", 00:17:38.088 "uuid": "9a12ac5b-bfc3-4799-9a17-4adc77b933dc", 00:17:38.088 "is_configured": true, 00:17:38.088 "data_offset": 0, 00:17:38.088 "data_size": 65536 00:17:38.088 } 00:17:38.088 ] 00:17:38.088 }' 00:17:38.088 
09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.088 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.660 [2024-11-27 09:54:39.539240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:38.660 [2024-11-27 09:54:39.539390] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.660 [2024-11-27 09:54:39.644150] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.660 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.660 [2024-11-27 09:54:39.704041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:38.919 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.919 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:38.919 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:38.919 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.919 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:17:38.919 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.919 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.919 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.919 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:38.919 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:38.919 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:38.919 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.919 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.919 [2024-11-27 09:54:39.870211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:38.919 [2024-11-27 09:54:39.870298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:38.919 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.919 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:38.919 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:38.919 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.919 09:54:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:38.919 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.919 09:54:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.919 09:54:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.919 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:38.919 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:38.919 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:38.919 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:38.920 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:38.920 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:38.920 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.920 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.179 BaseBdev2 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.179 [ 00:17:39.179 { 00:17:39.179 "name": "BaseBdev2", 00:17:39.179 "aliases": [ 00:17:39.179 "808589b9-1567-499c-b103-07a985cc88fe" 00:17:39.179 ], 00:17:39.179 "product_name": "Malloc disk", 00:17:39.179 "block_size": 512, 00:17:39.179 "num_blocks": 65536, 00:17:39.179 "uuid": "808589b9-1567-499c-b103-07a985cc88fe", 00:17:39.179 "assigned_rate_limits": { 00:17:39.179 "rw_ios_per_sec": 0, 00:17:39.179 "rw_mbytes_per_sec": 0, 00:17:39.179 "r_mbytes_per_sec": 0, 00:17:39.179 "w_mbytes_per_sec": 0 00:17:39.179 }, 00:17:39.179 "claimed": false, 00:17:39.179 "zoned": false, 00:17:39.179 "supported_io_types": { 00:17:39.179 "read": true, 00:17:39.179 "write": true, 00:17:39.179 "unmap": true, 00:17:39.179 "flush": true, 00:17:39.179 "reset": true, 00:17:39.179 "nvme_admin": false, 00:17:39.179 "nvme_io": false, 00:17:39.179 "nvme_io_md": false, 00:17:39.179 "write_zeroes": true, 00:17:39.179 "zcopy": true, 00:17:39.179 "get_zone_info": false, 00:17:39.179 "zone_management": false, 00:17:39.179 "zone_append": false, 00:17:39.179 "compare": false, 00:17:39.179 "compare_and_write": false, 00:17:39.179 "abort": true, 00:17:39.179 "seek_hole": false, 00:17:39.179 "seek_data": false, 00:17:39.179 "copy": true, 00:17:39.179 "nvme_iov_md": false 00:17:39.179 }, 00:17:39.179 "memory_domains": [ 00:17:39.179 { 00:17:39.179 "dma_device_id": "system", 00:17:39.179 "dma_device_type": 1 00:17:39.179 }, 
00:17:39.179 { 00:17:39.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.179 "dma_device_type": 2 00:17:39.179 } 00:17:39.179 ], 00:17:39.179 "driver_specific": {} 00:17:39.179 } 00:17:39.179 ] 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.179 BaseBdev3 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.179 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.179 [ 00:17:39.179 { 00:17:39.179 "name": "BaseBdev3", 00:17:39.179 "aliases": [ 00:17:39.179 "032727bc-7124-4880-a413-d3a9b0b94d56" 00:17:39.179 ], 00:17:39.179 "product_name": "Malloc disk", 00:17:39.179 "block_size": 512, 00:17:39.179 "num_blocks": 65536, 00:17:39.179 "uuid": "032727bc-7124-4880-a413-d3a9b0b94d56", 00:17:39.179 "assigned_rate_limits": { 00:17:39.179 "rw_ios_per_sec": 0, 00:17:39.179 "rw_mbytes_per_sec": 0, 00:17:39.179 "r_mbytes_per_sec": 0, 00:17:39.179 "w_mbytes_per_sec": 0 00:17:39.179 }, 00:17:39.179 "claimed": false, 00:17:39.179 "zoned": false, 00:17:39.179 "supported_io_types": { 00:17:39.179 "read": true, 00:17:39.179 "write": true, 00:17:39.179 "unmap": true, 00:17:39.179 "flush": true, 00:17:39.179 "reset": true, 00:17:39.179 "nvme_admin": false, 00:17:39.179 "nvme_io": false, 00:17:39.180 "nvme_io_md": false, 00:17:39.180 "write_zeroes": true, 00:17:39.180 "zcopy": true, 00:17:39.180 "get_zone_info": false, 00:17:39.180 "zone_management": false, 00:17:39.180 "zone_append": false, 00:17:39.180 "compare": false, 00:17:39.180 "compare_and_write": false, 00:17:39.180 "abort": true, 00:17:39.180 "seek_hole": false, 00:17:39.180 "seek_data": false, 00:17:39.180 "copy": true, 00:17:39.180 "nvme_iov_md": false 00:17:39.180 }, 00:17:39.180 "memory_domains": [ 00:17:39.180 { 00:17:39.180 "dma_device_id": "system", 00:17:39.180 
"dma_device_type": 1 00:17:39.180 }, 00:17:39.180 { 00:17:39.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.180 "dma_device_type": 2 00:17:39.180 } 00:17:39.180 ], 00:17:39.180 "driver_specific": {} 00:17:39.180 } 00:17:39.180 ] 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.180 BaseBdev4 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:39.180 09:54:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.180 [ 00:17:39.180 { 00:17:39.180 "name": "BaseBdev4", 00:17:39.180 "aliases": [ 00:17:39.180 "7363b540-393d-4f3f-88ff-4b8b2cfed23d" 00:17:39.180 ], 00:17:39.180 "product_name": "Malloc disk", 00:17:39.180 "block_size": 512, 00:17:39.180 "num_blocks": 65536, 00:17:39.180 "uuid": "7363b540-393d-4f3f-88ff-4b8b2cfed23d", 00:17:39.180 "assigned_rate_limits": { 00:17:39.180 "rw_ios_per_sec": 0, 00:17:39.180 "rw_mbytes_per_sec": 0, 00:17:39.180 "r_mbytes_per_sec": 0, 00:17:39.180 "w_mbytes_per_sec": 0 00:17:39.180 }, 00:17:39.180 "claimed": false, 00:17:39.180 "zoned": false, 00:17:39.180 "supported_io_types": { 00:17:39.180 "read": true, 00:17:39.180 "write": true, 00:17:39.180 "unmap": true, 00:17:39.180 "flush": true, 00:17:39.180 "reset": true, 00:17:39.180 "nvme_admin": false, 00:17:39.180 "nvme_io": false, 00:17:39.180 "nvme_io_md": false, 00:17:39.180 "write_zeroes": true, 00:17:39.180 "zcopy": true, 00:17:39.180 "get_zone_info": false, 00:17:39.180 "zone_management": false, 00:17:39.180 "zone_append": false, 00:17:39.180 "compare": false, 00:17:39.180 "compare_and_write": false, 00:17:39.180 "abort": true, 00:17:39.180 "seek_hole": false, 00:17:39.180 "seek_data": false, 00:17:39.180 "copy": true, 00:17:39.180 "nvme_iov_md": false 00:17:39.180 }, 00:17:39.180 "memory_domains": [ 00:17:39.180 { 00:17:39.180 
"dma_device_id": "system", 00:17:39.180 "dma_device_type": 1 00:17:39.180 }, 00:17:39.180 { 00:17:39.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.180 "dma_device_type": 2 00:17:39.180 } 00:17:39.180 ], 00:17:39.180 "driver_specific": {} 00:17:39.180 } 00:17:39.180 ] 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.180 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.180 [2024-11-27 09:54:40.309800] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:39.180 [2024-11-27 09:54:40.309882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:39.180 [2024-11-27 09:54:40.309922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:39.439 [2024-11-27 09:54:40.312289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:39.439 [2024-11-27 09:54:40.312358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:39.439 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.439 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:17:39.439 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.439 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:39.439 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:39.439 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.439 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:39.439 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.439 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.439 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.439 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.440 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.440 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.440 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.440 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.440 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.440 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.440 "name": "Existed_Raid", 00:17:39.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.440 "strip_size_kb": 64, 00:17:39.440 "state": "configuring", 00:17:39.440 "raid_level": "raid5f", 00:17:39.440 "superblock": false, 00:17:39.440 
"num_base_bdevs": 4, 00:17:39.440 "num_base_bdevs_discovered": 3, 00:17:39.440 "num_base_bdevs_operational": 4, 00:17:39.440 "base_bdevs_list": [ 00:17:39.440 { 00:17:39.440 "name": "BaseBdev1", 00:17:39.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.440 "is_configured": false, 00:17:39.440 "data_offset": 0, 00:17:39.440 "data_size": 0 00:17:39.440 }, 00:17:39.440 { 00:17:39.440 "name": "BaseBdev2", 00:17:39.440 "uuid": "808589b9-1567-499c-b103-07a985cc88fe", 00:17:39.440 "is_configured": true, 00:17:39.440 "data_offset": 0, 00:17:39.440 "data_size": 65536 00:17:39.440 }, 00:17:39.440 { 00:17:39.440 "name": "BaseBdev3", 00:17:39.440 "uuid": "032727bc-7124-4880-a413-d3a9b0b94d56", 00:17:39.440 "is_configured": true, 00:17:39.440 "data_offset": 0, 00:17:39.440 "data_size": 65536 00:17:39.440 }, 00:17:39.440 { 00:17:39.440 "name": "BaseBdev4", 00:17:39.440 "uuid": "7363b540-393d-4f3f-88ff-4b8b2cfed23d", 00:17:39.440 "is_configured": true, 00:17:39.440 "data_offset": 0, 00:17:39.440 "data_size": 65536 00:17:39.440 } 00:17:39.440 ] 00:17:39.440 }' 00:17:39.440 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.440 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.699 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:39.699 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.699 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.699 [2024-11-27 09:54:40.729154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:39.699 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.699 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:17:39.699 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:39.699 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:39.699 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:39.699 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:39.699 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:39.699 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.699 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.699 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.699 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.699 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.699 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:39.699 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.699 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.699 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.699 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.699 "name": "Existed_Raid", 00:17:39.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.699 "strip_size_kb": 64, 00:17:39.699 "state": "configuring", 00:17:39.699 "raid_level": "raid5f", 00:17:39.699 "superblock": false, 00:17:39.699 "num_base_bdevs": 4, 
00:17:39.699 "num_base_bdevs_discovered": 2, 00:17:39.699 "num_base_bdevs_operational": 4, 00:17:39.699 "base_bdevs_list": [ 00:17:39.699 { 00:17:39.699 "name": "BaseBdev1", 00:17:39.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.700 "is_configured": false, 00:17:39.700 "data_offset": 0, 00:17:39.700 "data_size": 0 00:17:39.700 }, 00:17:39.700 { 00:17:39.700 "name": null, 00:17:39.700 "uuid": "808589b9-1567-499c-b103-07a985cc88fe", 00:17:39.700 "is_configured": false, 00:17:39.700 "data_offset": 0, 00:17:39.700 "data_size": 65536 00:17:39.700 }, 00:17:39.700 { 00:17:39.700 "name": "BaseBdev3", 00:17:39.700 "uuid": "032727bc-7124-4880-a413-d3a9b0b94d56", 00:17:39.700 "is_configured": true, 00:17:39.700 "data_offset": 0, 00:17:39.700 "data_size": 65536 00:17:39.700 }, 00:17:39.700 { 00:17:39.700 "name": "BaseBdev4", 00:17:39.700 "uuid": "7363b540-393d-4f3f-88ff-4b8b2cfed23d", 00:17:39.700 "is_configured": true, 00:17:39.700 "data_offset": 0, 00:17:39.700 "data_size": 65536 00:17:39.700 } 00:17:39.700 ] 00:17:39.700 }' 00:17:39.700 09:54:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.700 09:54:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:40.268 09:54:41 
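After `bdev_raid_remove_base_bdev BaseBdev2`, the test re-reads the array and checks `jq '.[0].base_bdevs_list[1].is_configured'` against `false`, while the array as a whole stays in `configuring` state with two of four slots discovered. A minimal sketch of that bookkeeping, using an illustrative payload whose values are copied from the `configuring` JSON in the trace above:

```python
import json

# The "configuring" array state printed above, reduced to the fields
# the test inspects (illustrative payload, not live RPC output).
existed_raid = json.loads("""
{
  "name": "Existed_Raid",
  "state": "configuring",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": null,        "is_configured": false},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
""")

# Equivalent of the test's follow-up check:
#   jq '.[0].base_bdevs_list[1].is_configured'   (expects false)
assert existed_raid["base_bdevs_list"][1]["is_configured"] is False

# The discovered count tracks the number of configured slots: with
# BaseBdev1 never created and BaseBdev2 removed, only two remain.
configured = sum(b["is_configured"] for b in existed_raid["base_bdevs_list"])
assert configured == existed_raid["num_base_bdevs_discovered"]
```

The array cannot leave `configuring` until all four slots are claimed, which is why the trace then recreates `BaseBdev1` with `bdev_malloc_create 32 512 -b BaseBdev1`.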
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.268 [2024-11-27 09:54:41.300041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:40.268 BaseBdev1 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:40.268 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.268 09:54:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.268 [ 00:17:40.268 { 00:17:40.268 "name": "BaseBdev1", 00:17:40.268 "aliases": [ 00:17:40.268 "f527053e-3a0c-417e-a335-244555ae9fc4" 00:17:40.268 ], 00:17:40.268 "product_name": "Malloc disk", 00:17:40.268 "block_size": 512, 00:17:40.268 "num_blocks": 65536, 00:17:40.268 "uuid": "f527053e-3a0c-417e-a335-244555ae9fc4", 00:17:40.268 "assigned_rate_limits": { 00:17:40.268 "rw_ios_per_sec": 0, 00:17:40.268 "rw_mbytes_per_sec": 0, 00:17:40.268 "r_mbytes_per_sec": 0, 00:17:40.268 "w_mbytes_per_sec": 0 00:17:40.268 }, 00:17:40.268 "claimed": true, 00:17:40.268 "claim_type": "exclusive_write", 00:17:40.268 "zoned": false, 00:17:40.268 "supported_io_types": { 00:17:40.268 "read": true, 00:17:40.269 "write": true, 00:17:40.269 "unmap": true, 00:17:40.269 "flush": true, 00:17:40.269 "reset": true, 00:17:40.269 "nvme_admin": false, 00:17:40.269 "nvme_io": false, 00:17:40.269 "nvme_io_md": false, 00:17:40.269 "write_zeroes": true, 00:17:40.269 "zcopy": true, 00:17:40.269 "get_zone_info": false, 00:17:40.269 "zone_management": false, 00:17:40.269 "zone_append": false, 00:17:40.269 "compare": false, 00:17:40.269 "compare_and_write": false, 00:17:40.269 "abort": true, 00:17:40.269 "seek_hole": false, 00:17:40.269 "seek_data": false, 00:17:40.269 "copy": true, 00:17:40.269 "nvme_iov_md": false 00:17:40.269 }, 00:17:40.269 "memory_domains": [ 00:17:40.269 { 00:17:40.269 "dma_device_id": "system", 00:17:40.269 "dma_device_type": 1 00:17:40.269 }, 00:17:40.269 { 00:17:40.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.269 "dma_device_type": 2 00:17:40.269 } 00:17:40.269 ], 00:17:40.269 "driver_specific": {} 00:17:40.269 } 00:17:40.269 ] 00:17:40.269 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.269 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:40.269 09:54:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:40.269 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.269 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.269 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.269 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.269 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:40.269 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.269 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.269 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.269 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.269 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.269 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.269 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.269 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.269 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.269 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.269 "name": "Existed_Raid", 00:17:40.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.269 "strip_size_kb": 64, 00:17:40.269 "state": 
"configuring", 00:17:40.269 "raid_level": "raid5f", 00:17:40.269 "superblock": false, 00:17:40.269 "num_base_bdevs": 4, 00:17:40.269 "num_base_bdevs_discovered": 3, 00:17:40.269 "num_base_bdevs_operational": 4, 00:17:40.269 "base_bdevs_list": [ 00:17:40.269 { 00:17:40.269 "name": "BaseBdev1", 00:17:40.269 "uuid": "f527053e-3a0c-417e-a335-244555ae9fc4", 00:17:40.269 "is_configured": true, 00:17:40.269 "data_offset": 0, 00:17:40.269 "data_size": 65536 00:17:40.269 }, 00:17:40.269 { 00:17:40.269 "name": null, 00:17:40.269 "uuid": "808589b9-1567-499c-b103-07a985cc88fe", 00:17:40.269 "is_configured": false, 00:17:40.269 "data_offset": 0, 00:17:40.269 "data_size": 65536 00:17:40.269 }, 00:17:40.269 { 00:17:40.269 "name": "BaseBdev3", 00:17:40.269 "uuid": "032727bc-7124-4880-a413-d3a9b0b94d56", 00:17:40.269 "is_configured": true, 00:17:40.269 "data_offset": 0, 00:17:40.269 "data_size": 65536 00:17:40.269 }, 00:17:40.269 { 00:17:40.269 "name": "BaseBdev4", 00:17:40.269 "uuid": "7363b540-393d-4f3f-88ff-4b8b2cfed23d", 00:17:40.269 "is_configured": true, 00:17:40.269 "data_offset": 0, 00:17:40.269 "data_size": 65536 00:17:40.269 } 00:17:40.269 ] 00:17:40.269 }' 00:17:40.269 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.269 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.837 09:54:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.837 [2024-11-27 09:54:41.863268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.837 09:54:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.837 "name": "Existed_Raid", 00:17:40.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.837 "strip_size_kb": 64, 00:17:40.837 "state": "configuring", 00:17:40.837 "raid_level": "raid5f", 00:17:40.837 "superblock": false, 00:17:40.837 "num_base_bdevs": 4, 00:17:40.837 "num_base_bdevs_discovered": 2, 00:17:40.837 "num_base_bdevs_operational": 4, 00:17:40.837 "base_bdevs_list": [ 00:17:40.837 { 00:17:40.837 "name": "BaseBdev1", 00:17:40.837 "uuid": "f527053e-3a0c-417e-a335-244555ae9fc4", 00:17:40.837 "is_configured": true, 00:17:40.837 "data_offset": 0, 00:17:40.837 "data_size": 65536 00:17:40.837 }, 00:17:40.837 { 00:17:40.837 "name": null, 00:17:40.837 "uuid": "808589b9-1567-499c-b103-07a985cc88fe", 00:17:40.837 "is_configured": false, 00:17:40.837 "data_offset": 0, 00:17:40.837 "data_size": 65536 00:17:40.837 }, 00:17:40.837 { 00:17:40.837 "name": null, 00:17:40.837 "uuid": "032727bc-7124-4880-a413-d3a9b0b94d56", 00:17:40.837 "is_configured": false, 00:17:40.837 "data_offset": 0, 00:17:40.837 "data_size": 65536 00:17:40.837 }, 00:17:40.837 { 00:17:40.837 "name": "BaseBdev4", 00:17:40.837 "uuid": "7363b540-393d-4f3f-88ff-4b8b2cfed23d", 00:17:40.837 "is_configured": true, 00:17:40.837 "data_offset": 0, 00:17:40.837 "data_size": 65536 00:17:40.837 } 00:17:40.837 ] 00:17:40.837 }' 00:17:40.837 09:54:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.838 09:54:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.406 [2024-11-27 09:54:42.362434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.406 
09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.406 "name": "Existed_Raid", 00:17:41.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.406 "strip_size_kb": 64, 00:17:41.406 "state": "configuring", 00:17:41.406 "raid_level": "raid5f", 00:17:41.406 "superblock": false, 00:17:41.406 "num_base_bdevs": 4, 00:17:41.406 "num_base_bdevs_discovered": 3, 00:17:41.406 "num_base_bdevs_operational": 4, 00:17:41.406 "base_bdevs_list": [ 00:17:41.406 { 00:17:41.406 "name": "BaseBdev1", 00:17:41.406 "uuid": "f527053e-3a0c-417e-a335-244555ae9fc4", 00:17:41.406 "is_configured": true, 00:17:41.406 "data_offset": 0, 00:17:41.406 "data_size": 65536 00:17:41.406 }, 00:17:41.406 { 00:17:41.406 "name": null, 00:17:41.406 "uuid": "808589b9-1567-499c-b103-07a985cc88fe", 00:17:41.406 "is_configured": 
false, 00:17:41.406 "data_offset": 0, 00:17:41.406 "data_size": 65536 00:17:41.406 }, 00:17:41.406 { 00:17:41.406 "name": "BaseBdev3", 00:17:41.406 "uuid": "032727bc-7124-4880-a413-d3a9b0b94d56", 00:17:41.406 "is_configured": true, 00:17:41.406 "data_offset": 0, 00:17:41.406 "data_size": 65536 00:17:41.406 }, 00:17:41.406 { 00:17:41.406 "name": "BaseBdev4", 00:17:41.406 "uuid": "7363b540-393d-4f3f-88ff-4b8b2cfed23d", 00:17:41.406 "is_configured": true, 00:17:41.406 "data_offset": 0, 00:17:41.406 "data_size": 65536 00:17:41.406 } 00:17:41.406 ] 00:17:41.406 }' 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.406 09:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.665 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.665 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:41.665 09:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.665 09:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.925 [2024-11-27 09:54:42.841791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.925 "name": "Existed_Raid", 00:17:41.925 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:41.925 "strip_size_kb": 64, 00:17:41.925 "state": "configuring", 00:17:41.925 "raid_level": "raid5f", 00:17:41.925 "superblock": false, 00:17:41.925 "num_base_bdevs": 4, 00:17:41.925 "num_base_bdevs_discovered": 2, 00:17:41.925 "num_base_bdevs_operational": 4, 00:17:41.925 "base_bdevs_list": [ 00:17:41.925 { 00:17:41.925 "name": null, 00:17:41.925 "uuid": "f527053e-3a0c-417e-a335-244555ae9fc4", 00:17:41.925 "is_configured": false, 00:17:41.925 "data_offset": 0, 00:17:41.925 "data_size": 65536 00:17:41.925 }, 00:17:41.925 { 00:17:41.925 "name": null, 00:17:41.925 "uuid": "808589b9-1567-499c-b103-07a985cc88fe", 00:17:41.925 "is_configured": false, 00:17:41.925 "data_offset": 0, 00:17:41.925 "data_size": 65536 00:17:41.925 }, 00:17:41.925 { 00:17:41.925 "name": "BaseBdev3", 00:17:41.925 "uuid": "032727bc-7124-4880-a413-d3a9b0b94d56", 00:17:41.925 "is_configured": true, 00:17:41.925 "data_offset": 0, 00:17:41.925 "data_size": 65536 00:17:41.925 }, 00:17:41.925 { 00:17:41.925 "name": "BaseBdev4", 00:17:41.925 "uuid": "7363b540-393d-4f3f-88ff-4b8b2cfed23d", 00:17:41.925 "is_configured": true, 00:17:41.925 "data_offset": 0, 00:17:41.925 "data_size": 65536 00:17:41.925 } 00:17:41.925 ] 00:17:41.925 }' 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.925 09:54:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.494 [2024-11-27 09:54:43.479483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.494 "name": "Existed_Raid", 00:17:42.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.494 "strip_size_kb": 64, 00:17:42.494 "state": "configuring", 00:17:42.494 "raid_level": "raid5f", 00:17:42.494 "superblock": false, 00:17:42.494 "num_base_bdevs": 4, 00:17:42.494 "num_base_bdevs_discovered": 3, 00:17:42.494 "num_base_bdevs_operational": 4, 00:17:42.494 "base_bdevs_list": [ 00:17:42.494 { 00:17:42.494 "name": null, 00:17:42.494 "uuid": "f527053e-3a0c-417e-a335-244555ae9fc4", 00:17:42.494 "is_configured": false, 00:17:42.494 "data_offset": 0, 00:17:42.494 "data_size": 65536 00:17:42.494 }, 00:17:42.494 { 00:17:42.494 "name": "BaseBdev2", 00:17:42.494 "uuid": "808589b9-1567-499c-b103-07a985cc88fe", 00:17:42.494 "is_configured": true, 00:17:42.494 "data_offset": 0, 00:17:42.494 "data_size": 65536 00:17:42.494 }, 00:17:42.494 { 00:17:42.494 "name": "BaseBdev3", 00:17:42.494 "uuid": "032727bc-7124-4880-a413-d3a9b0b94d56", 00:17:42.494 "is_configured": true, 00:17:42.494 "data_offset": 0, 00:17:42.494 "data_size": 65536 00:17:42.494 }, 00:17:42.494 { 00:17:42.494 "name": "BaseBdev4", 00:17:42.494 "uuid": "7363b540-393d-4f3f-88ff-4b8b2cfed23d", 00:17:42.494 "is_configured": true, 00:17:42.494 "data_offset": 0, 00:17:42.494 "data_size": 65536 00:17:42.494 } 00:17:42.494 ] 00:17:42.494 }' 00:17:42.494 09:54:43 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.494 09:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.063 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.063 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:43.063 09:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.063 09:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.063 09:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.063 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:43.063 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.063 09:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.063 09:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.063 09:54:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:43.063 09:54:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f527053e-3a0c-417e-a335-244555ae9fc4 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.064 [2024-11-27 09:54:44.050530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:43.064 [2024-11-27 
09:54:44.050641] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:43.064 [2024-11-27 09:54:44.050650] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:43.064 [2024-11-27 09:54:44.050970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:43.064 [2024-11-27 09:54:44.057995] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:43.064 [2024-11-27 09:54:44.058066] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:43.064 [2024-11-27 09:54:44.058445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.064 NewBaseBdev 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.064 [ 00:17:43.064 { 00:17:43.064 "name": "NewBaseBdev", 00:17:43.064 "aliases": [ 00:17:43.064 "f527053e-3a0c-417e-a335-244555ae9fc4" 00:17:43.064 ], 00:17:43.064 "product_name": "Malloc disk", 00:17:43.064 "block_size": 512, 00:17:43.064 "num_blocks": 65536, 00:17:43.064 "uuid": "f527053e-3a0c-417e-a335-244555ae9fc4", 00:17:43.064 "assigned_rate_limits": { 00:17:43.064 "rw_ios_per_sec": 0, 00:17:43.064 "rw_mbytes_per_sec": 0, 00:17:43.064 "r_mbytes_per_sec": 0, 00:17:43.064 "w_mbytes_per_sec": 0 00:17:43.064 }, 00:17:43.064 "claimed": true, 00:17:43.064 "claim_type": "exclusive_write", 00:17:43.064 "zoned": false, 00:17:43.064 "supported_io_types": { 00:17:43.064 "read": true, 00:17:43.064 "write": true, 00:17:43.064 "unmap": true, 00:17:43.064 "flush": true, 00:17:43.064 "reset": true, 00:17:43.064 "nvme_admin": false, 00:17:43.064 "nvme_io": false, 00:17:43.064 "nvme_io_md": false, 00:17:43.064 "write_zeroes": true, 00:17:43.064 "zcopy": true, 00:17:43.064 "get_zone_info": false, 00:17:43.064 "zone_management": false, 00:17:43.064 "zone_append": false, 00:17:43.064 "compare": false, 00:17:43.064 "compare_and_write": false, 00:17:43.064 "abort": true, 00:17:43.064 "seek_hole": false, 00:17:43.064 "seek_data": false, 00:17:43.064 "copy": true, 00:17:43.064 "nvme_iov_md": false 00:17:43.064 }, 00:17:43.064 "memory_domains": [ 00:17:43.064 { 00:17:43.064 "dma_device_id": "system", 00:17:43.064 "dma_device_type": 1 00:17:43.064 }, 00:17:43.064 { 00:17:43.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:43.064 "dma_device_type": 2 00:17:43.064 } 
00:17:43.064 ], 00:17:43.064 "driver_specific": {} 00:17:43.064 } 00:17:43.064 ] 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.064 "name": "Existed_Raid", 00:17:43.064 "uuid": "cb2e19a1-796c-44b5-90c8-31afdd267376", 00:17:43.064 "strip_size_kb": 64, 00:17:43.064 "state": "online", 00:17:43.064 "raid_level": "raid5f", 00:17:43.064 "superblock": false, 00:17:43.064 "num_base_bdevs": 4, 00:17:43.064 "num_base_bdevs_discovered": 4, 00:17:43.064 "num_base_bdevs_operational": 4, 00:17:43.064 "base_bdevs_list": [ 00:17:43.064 { 00:17:43.064 "name": "NewBaseBdev", 00:17:43.064 "uuid": "f527053e-3a0c-417e-a335-244555ae9fc4", 00:17:43.064 "is_configured": true, 00:17:43.064 "data_offset": 0, 00:17:43.064 "data_size": 65536 00:17:43.064 }, 00:17:43.064 { 00:17:43.064 "name": "BaseBdev2", 00:17:43.064 "uuid": "808589b9-1567-499c-b103-07a985cc88fe", 00:17:43.064 "is_configured": true, 00:17:43.064 "data_offset": 0, 00:17:43.064 "data_size": 65536 00:17:43.064 }, 00:17:43.064 { 00:17:43.064 "name": "BaseBdev3", 00:17:43.064 "uuid": "032727bc-7124-4880-a413-d3a9b0b94d56", 00:17:43.064 "is_configured": true, 00:17:43.064 "data_offset": 0, 00:17:43.064 "data_size": 65536 00:17:43.064 }, 00:17:43.064 { 00:17:43.064 "name": "BaseBdev4", 00:17:43.064 "uuid": "7363b540-393d-4f3f-88ff-4b8b2cfed23d", 00:17:43.064 "is_configured": true, 00:17:43.064 "data_offset": 0, 00:17:43.064 "data_size": 65536 00:17:43.064 } 00:17:43.064 ] 00:17:43.064 }' 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.064 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.633 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:43.633 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:43.633 09:54:44 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:43.633 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:43.633 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.634 [2024-11-27 09:54:44.571688] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:43.634 "name": "Existed_Raid", 00:17:43.634 "aliases": [ 00:17:43.634 "cb2e19a1-796c-44b5-90c8-31afdd267376" 00:17:43.634 ], 00:17:43.634 "product_name": "Raid Volume", 00:17:43.634 "block_size": 512, 00:17:43.634 "num_blocks": 196608, 00:17:43.634 "uuid": "cb2e19a1-796c-44b5-90c8-31afdd267376", 00:17:43.634 "assigned_rate_limits": { 00:17:43.634 "rw_ios_per_sec": 0, 00:17:43.634 "rw_mbytes_per_sec": 0, 00:17:43.634 "r_mbytes_per_sec": 0, 00:17:43.634 "w_mbytes_per_sec": 0 00:17:43.634 }, 00:17:43.634 "claimed": false, 00:17:43.634 "zoned": false, 00:17:43.634 "supported_io_types": { 00:17:43.634 "read": true, 00:17:43.634 "write": true, 00:17:43.634 "unmap": false, 00:17:43.634 "flush": false, 00:17:43.634 "reset": true, 00:17:43.634 "nvme_admin": false, 00:17:43.634 "nvme_io": false, 00:17:43.634 "nvme_io_md": 
false, 00:17:43.634 "write_zeroes": true, 00:17:43.634 "zcopy": false, 00:17:43.634 "get_zone_info": false, 00:17:43.634 "zone_management": false, 00:17:43.634 "zone_append": false, 00:17:43.634 "compare": false, 00:17:43.634 "compare_and_write": false, 00:17:43.634 "abort": false, 00:17:43.634 "seek_hole": false, 00:17:43.634 "seek_data": false, 00:17:43.634 "copy": false, 00:17:43.634 "nvme_iov_md": false 00:17:43.634 }, 00:17:43.634 "driver_specific": { 00:17:43.634 "raid": { 00:17:43.634 "uuid": "cb2e19a1-796c-44b5-90c8-31afdd267376", 00:17:43.634 "strip_size_kb": 64, 00:17:43.634 "state": "online", 00:17:43.634 "raid_level": "raid5f", 00:17:43.634 "superblock": false, 00:17:43.634 "num_base_bdevs": 4, 00:17:43.634 "num_base_bdevs_discovered": 4, 00:17:43.634 "num_base_bdevs_operational": 4, 00:17:43.634 "base_bdevs_list": [ 00:17:43.634 { 00:17:43.634 "name": "NewBaseBdev", 00:17:43.634 "uuid": "f527053e-3a0c-417e-a335-244555ae9fc4", 00:17:43.634 "is_configured": true, 00:17:43.634 "data_offset": 0, 00:17:43.634 "data_size": 65536 00:17:43.634 }, 00:17:43.634 { 00:17:43.634 "name": "BaseBdev2", 00:17:43.634 "uuid": "808589b9-1567-499c-b103-07a985cc88fe", 00:17:43.634 "is_configured": true, 00:17:43.634 "data_offset": 0, 00:17:43.634 "data_size": 65536 00:17:43.634 }, 00:17:43.634 { 00:17:43.634 "name": "BaseBdev3", 00:17:43.634 "uuid": "032727bc-7124-4880-a413-d3a9b0b94d56", 00:17:43.634 "is_configured": true, 00:17:43.634 "data_offset": 0, 00:17:43.634 "data_size": 65536 00:17:43.634 }, 00:17:43.634 { 00:17:43.634 "name": "BaseBdev4", 00:17:43.634 "uuid": "7363b540-393d-4f3f-88ff-4b8b2cfed23d", 00:17:43.634 "is_configured": true, 00:17:43.634 "data_offset": 0, 00:17:43.634 "data_size": 65536 00:17:43.634 } 00:17:43.634 ] 00:17:43.634 } 00:17:43.634 } 00:17:43.634 }' 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:43.634 09:54:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:43.634 BaseBdev2 00:17:43.634 BaseBdev3 00:17:43.634 BaseBdev4' 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.634 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.894 09:54:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.894 [2024-11-27 09:54:44.850947] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:43.894 [2024-11-27 09:54:44.851012] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.894 [2024-11-27 09:54:44.851141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.894 [2024-11-27 09:54:44.851565] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.894 [2024-11-27 09:54:44.851589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83086 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83086 ']' 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83086 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83086 00:17:43.894 killing process with pid 83086 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83086' 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83086 00:17:43.894 [2024-11-27 09:54:44.898960] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:43.894 09:54:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83086 00:17:44.464 [2024-11-27 09:54:45.335445] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:45.844 09:54:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:45.844 00:17:45.844 real 0m12.084s 00:17:45.844 user 0m18.653s 00:17:45.844 sys 0m2.596s 00:17:45.844 09:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:45.844 09:54:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.844 ************************************ 00:17:45.844 END TEST raid5f_state_function_test 00:17:45.844 ************************************ 00:17:45.844 09:54:46 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:17:45.844 09:54:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:45.845 09:54:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:45.845 09:54:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:45.845 ************************************ 00:17:45.845 START TEST 
raid5f_state_function_test_sb 00:17:45.845 ************************************ 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:17:45.845 
09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83759 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83759' 00:17:45.845 Process raid pid: 83759 00:17:45.845 09:54:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83759 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83759 ']' 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.845 09:54:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.845 [2024-11-27 09:54:46.760167] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:17:45.845 [2024-11-27 09:54:46.760316] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.845 [2024-11-27 09:54:46.940816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.104 [2024-11-27 09:54:47.085973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.363 [2024-11-27 09:54:47.333409] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.363 [2024-11-27 09:54:47.333514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.623 [2024-11-27 09:54:47.619503] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:46.623 [2024-11-27 09:54:47.619587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:46.623 [2024-11-27 09:54:47.619604] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:46.623 [2024-11-27 09:54:47.619618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:46.623 [2024-11-27 09:54:47.619629] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:17:46.623 [2024-11-27 09:54:47.619642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:46.623 [2024-11-27 09:54:47.619653] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:46.623 [2024-11-27 09:54:47.619666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.623 "name": "Existed_Raid", 00:17:46.623 "uuid": "fdf95929-c8d1-4a54-b9e9-6082d86d6073", 00:17:46.623 "strip_size_kb": 64, 00:17:46.623 "state": "configuring", 00:17:46.623 "raid_level": "raid5f", 00:17:46.623 "superblock": true, 00:17:46.623 "num_base_bdevs": 4, 00:17:46.623 "num_base_bdevs_discovered": 0, 00:17:46.623 "num_base_bdevs_operational": 4, 00:17:46.623 "base_bdevs_list": [ 00:17:46.623 { 00:17:46.623 "name": "BaseBdev1", 00:17:46.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.623 "is_configured": false, 00:17:46.623 "data_offset": 0, 00:17:46.623 "data_size": 0 00:17:46.623 }, 00:17:46.623 { 00:17:46.623 "name": "BaseBdev2", 00:17:46.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.623 "is_configured": false, 00:17:46.623 "data_offset": 0, 00:17:46.623 "data_size": 0 00:17:46.623 }, 00:17:46.623 { 00:17:46.623 "name": "BaseBdev3", 00:17:46.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.623 "is_configured": false, 00:17:46.623 "data_offset": 0, 00:17:46.623 "data_size": 0 00:17:46.623 }, 00:17:46.623 { 00:17:46.623 "name": "BaseBdev4", 00:17:46.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.623 "is_configured": false, 00:17:46.623 "data_offset": 0, 00:17:46.623 "data_size": 0 00:17:46.623 } 00:17:46.623 ] 00:17:46.623 }' 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.623 09:54:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:47.190 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:47.190 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.190 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.190 [2024-11-27 09:54:48.038701] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:47.190 [2024-11-27 09:54:48.038770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:47.190 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.191 [2024-11-27 09:54:48.050693] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:47.191 [2024-11-27 09:54:48.050760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:47.191 [2024-11-27 09:54:48.050772] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:47.191 [2024-11-27 09:54:48.050784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:47.191 [2024-11-27 09:54:48.050793] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:47.191 [2024-11-27 09:54:48.050806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:47.191 [2024-11-27 09:54:48.050814] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:47.191 [2024-11-27 09:54:48.050825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.191 [2024-11-27 09:54:48.105982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.191 BaseBdev1 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.191 [ 00:17:47.191 { 00:17:47.191 "name": "BaseBdev1", 00:17:47.191 "aliases": [ 00:17:47.191 "867743fd-a22d-421d-84c0-9c5747e14213" 00:17:47.191 ], 00:17:47.191 "product_name": "Malloc disk", 00:17:47.191 "block_size": 512, 00:17:47.191 "num_blocks": 65536, 00:17:47.191 "uuid": "867743fd-a22d-421d-84c0-9c5747e14213", 00:17:47.191 "assigned_rate_limits": { 00:17:47.191 "rw_ios_per_sec": 0, 00:17:47.191 "rw_mbytes_per_sec": 0, 00:17:47.191 "r_mbytes_per_sec": 0, 00:17:47.191 "w_mbytes_per_sec": 0 00:17:47.191 }, 00:17:47.191 "claimed": true, 00:17:47.191 "claim_type": "exclusive_write", 00:17:47.191 "zoned": false, 00:17:47.191 "supported_io_types": { 00:17:47.191 "read": true, 00:17:47.191 "write": true, 00:17:47.191 "unmap": true, 00:17:47.191 "flush": true, 00:17:47.191 "reset": true, 00:17:47.191 "nvme_admin": false, 00:17:47.191 "nvme_io": false, 00:17:47.191 "nvme_io_md": false, 00:17:47.191 "write_zeroes": true, 00:17:47.191 "zcopy": true, 00:17:47.191 "get_zone_info": false, 00:17:47.191 "zone_management": false, 00:17:47.191 "zone_append": false, 00:17:47.191 "compare": false, 00:17:47.191 "compare_and_write": false, 00:17:47.191 "abort": true, 00:17:47.191 "seek_hole": false, 00:17:47.191 "seek_data": false, 00:17:47.191 "copy": true, 00:17:47.191 "nvme_iov_md": false 00:17:47.191 }, 00:17:47.191 "memory_domains": [ 00:17:47.191 { 00:17:47.191 "dma_device_id": "system", 00:17:47.191 "dma_device_type": 1 00:17:47.191 }, 00:17:47.191 { 00:17:47.191 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:47.191 "dma_device_type": 2 00:17:47.191 } 00:17:47.191 ], 00:17:47.191 "driver_specific": {} 00:17:47.191 } 00:17:47.191 ] 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.191 09:54:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.191 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.191 "name": "Existed_Raid", 00:17:47.191 "uuid": "60b33314-a5b0-46d0-b04d-371219e64ac4", 00:17:47.191 "strip_size_kb": 64, 00:17:47.191 "state": "configuring", 00:17:47.191 "raid_level": "raid5f", 00:17:47.192 "superblock": true, 00:17:47.192 "num_base_bdevs": 4, 00:17:47.192 "num_base_bdevs_discovered": 1, 00:17:47.192 "num_base_bdevs_operational": 4, 00:17:47.192 "base_bdevs_list": [ 00:17:47.192 { 00:17:47.192 "name": "BaseBdev1", 00:17:47.192 "uuid": "867743fd-a22d-421d-84c0-9c5747e14213", 00:17:47.192 "is_configured": true, 00:17:47.192 "data_offset": 2048, 00:17:47.192 "data_size": 63488 00:17:47.192 }, 00:17:47.192 { 00:17:47.192 "name": "BaseBdev2", 00:17:47.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.192 "is_configured": false, 00:17:47.192 "data_offset": 0, 00:17:47.192 "data_size": 0 00:17:47.192 }, 00:17:47.192 { 00:17:47.192 "name": "BaseBdev3", 00:17:47.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.192 "is_configured": false, 00:17:47.192 "data_offset": 0, 00:17:47.192 "data_size": 0 00:17:47.192 }, 00:17:47.192 { 00:17:47.192 "name": "BaseBdev4", 00:17:47.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.192 "is_configured": false, 00:17:47.192 "data_offset": 0, 00:17:47.192 "data_size": 0 00:17:47.192 } 00:17:47.192 ] 00:17:47.192 }' 00:17:47.192 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.192 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.761 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:47.761 09:54:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.761 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.762 [2024-11-27 09:54:48.657138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:47.762 [2024-11-27 09:54:48.657226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.762 [2024-11-27 09:54:48.669205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.762 [2024-11-27 09:54:48.671483] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:47.762 [2024-11-27 09:54:48.671540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:47.762 [2024-11-27 09:54:48.671552] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:47.762 [2024-11-27 09:54:48.671565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:47.762 [2024-11-27 09:54:48.671574] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:47.762 [2024-11-27 09:54:48.671585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.762 09:54:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.762 "name": "Existed_Raid", 00:17:47.762 "uuid": "8443853e-7641-4507-8874-f8e22d9db5e5", 00:17:47.762 "strip_size_kb": 64, 00:17:47.762 "state": "configuring", 00:17:47.762 "raid_level": "raid5f", 00:17:47.762 "superblock": true, 00:17:47.762 "num_base_bdevs": 4, 00:17:47.762 "num_base_bdevs_discovered": 1, 00:17:47.762 "num_base_bdevs_operational": 4, 00:17:47.762 "base_bdevs_list": [ 00:17:47.762 { 00:17:47.762 "name": "BaseBdev1", 00:17:47.762 "uuid": "867743fd-a22d-421d-84c0-9c5747e14213", 00:17:47.762 "is_configured": true, 00:17:47.762 "data_offset": 2048, 00:17:47.762 "data_size": 63488 00:17:47.762 }, 00:17:47.762 { 00:17:47.762 "name": "BaseBdev2", 00:17:47.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.762 "is_configured": false, 00:17:47.762 "data_offset": 0, 00:17:47.762 "data_size": 0 00:17:47.762 }, 00:17:47.762 { 00:17:47.762 "name": "BaseBdev3", 00:17:47.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.762 "is_configured": false, 00:17:47.762 "data_offset": 0, 00:17:47.762 "data_size": 0 00:17:47.762 }, 00:17:47.762 { 00:17:47.762 "name": "BaseBdev4", 00:17:47.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.762 "is_configured": false, 00:17:47.762 "data_offset": 0, 00:17:47.762 "data_size": 0 00:17:47.762 } 00:17:47.762 ] 00:17:47.762 }' 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.762 09:54:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.023 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:48.023 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:48.023 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.281 [2024-11-27 09:54:49.186069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:48.281 BaseBdev2 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.281 [ 00:17:48.281 { 00:17:48.281 "name": "BaseBdev2", 00:17:48.281 "aliases": [ 00:17:48.281 
"597d91e6-0f1d-4241-b305-9f03b43392f4" 00:17:48.281 ], 00:17:48.281 "product_name": "Malloc disk", 00:17:48.281 "block_size": 512, 00:17:48.281 "num_blocks": 65536, 00:17:48.281 "uuid": "597d91e6-0f1d-4241-b305-9f03b43392f4", 00:17:48.281 "assigned_rate_limits": { 00:17:48.281 "rw_ios_per_sec": 0, 00:17:48.281 "rw_mbytes_per_sec": 0, 00:17:48.281 "r_mbytes_per_sec": 0, 00:17:48.281 "w_mbytes_per_sec": 0 00:17:48.281 }, 00:17:48.281 "claimed": true, 00:17:48.281 "claim_type": "exclusive_write", 00:17:48.281 "zoned": false, 00:17:48.281 "supported_io_types": { 00:17:48.281 "read": true, 00:17:48.281 "write": true, 00:17:48.281 "unmap": true, 00:17:48.281 "flush": true, 00:17:48.281 "reset": true, 00:17:48.281 "nvme_admin": false, 00:17:48.281 "nvme_io": false, 00:17:48.281 "nvme_io_md": false, 00:17:48.281 "write_zeroes": true, 00:17:48.281 "zcopy": true, 00:17:48.281 "get_zone_info": false, 00:17:48.281 "zone_management": false, 00:17:48.281 "zone_append": false, 00:17:48.281 "compare": false, 00:17:48.281 "compare_and_write": false, 00:17:48.281 "abort": true, 00:17:48.281 "seek_hole": false, 00:17:48.281 "seek_data": false, 00:17:48.281 "copy": true, 00:17:48.281 "nvme_iov_md": false 00:17:48.281 }, 00:17:48.281 "memory_domains": [ 00:17:48.281 { 00:17:48.281 "dma_device_id": "system", 00:17:48.281 "dma_device_type": 1 00:17:48.281 }, 00:17:48.281 { 00:17:48.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.281 "dma_device_type": 2 00:17:48.281 } 00:17:48.281 ], 00:17:48.281 "driver_specific": {} 00:17:48.281 } 00:17:48.281 ] 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.281 "name": "Existed_Raid", 00:17:48.281 "uuid": 
"8443853e-7641-4507-8874-f8e22d9db5e5", 00:17:48.281 "strip_size_kb": 64, 00:17:48.281 "state": "configuring", 00:17:48.281 "raid_level": "raid5f", 00:17:48.281 "superblock": true, 00:17:48.281 "num_base_bdevs": 4, 00:17:48.281 "num_base_bdevs_discovered": 2, 00:17:48.281 "num_base_bdevs_operational": 4, 00:17:48.281 "base_bdevs_list": [ 00:17:48.281 { 00:17:48.281 "name": "BaseBdev1", 00:17:48.281 "uuid": "867743fd-a22d-421d-84c0-9c5747e14213", 00:17:48.281 "is_configured": true, 00:17:48.281 "data_offset": 2048, 00:17:48.281 "data_size": 63488 00:17:48.281 }, 00:17:48.281 { 00:17:48.281 "name": "BaseBdev2", 00:17:48.281 "uuid": "597d91e6-0f1d-4241-b305-9f03b43392f4", 00:17:48.281 "is_configured": true, 00:17:48.281 "data_offset": 2048, 00:17:48.281 "data_size": 63488 00:17:48.281 }, 00:17:48.281 { 00:17:48.281 "name": "BaseBdev3", 00:17:48.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.281 "is_configured": false, 00:17:48.281 "data_offset": 0, 00:17:48.281 "data_size": 0 00:17:48.281 }, 00:17:48.281 { 00:17:48.281 "name": "BaseBdev4", 00:17:48.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.281 "is_configured": false, 00:17:48.281 "data_offset": 0, 00:17:48.281 "data_size": 0 00:17:48.281 } 00:17:48.281 ] 00:17:48.281 }' 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.281 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.849 [2024-11-27 09:54:49.732993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:48.849 BaseBdev3 
00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.849 [ 00:17:48.849 { 00:17:48.849 "name": "BaseBdev3", 00:17:48.849 "aliases": [ 00:17:48.849 "e789bd53-5ea8-4df1-9c6a-cd61f4ce6fb8" 00:17:48.849 ], 00:17:48.849 "product_name": "Malloc disk", 00:17:48.849 "block_size": 512, 00:17:48.849 "num_blocks": 65536, 00:17:48.849 "uuid": "e789bd53-5ea8-4df1-9c6a-cd61f4ce6fb8", 00:17:48.849 
"assigned_rate_limits": { 00:17:48.849 "rw_ios_per_sec": 0, 00:17:48.849 "rw_mbytes_per_sec": 0, 00:17:48.849 "r_mbytes_per_sec": 0, 00:17:48.849 "w_mbytes_per_sec": 0 00:17:48.849 }, 00:17:48.849 "claimed": true, 00:17:48.849 "claim_type": "exclusive_write", 00:17:48.849 "zoned": false, 00:17:48.849 "supported_io_types": { 00:17:48.849 "read": true, 00:17:48.849 "write": true, 00:17:48.849 "unmap": true, 00:17:48.849 "flush": true, 00:17:48.849 "reset": true, 00:17:48.849 "nvme_admin": false, 00:17:48.849 "nvme_io": false, 00:17:48.849 "nvme_io_md": false, 00:17:48.849 "write_zeroes": true, 00:17:48.849 "zcopy": true, 00:17:48.849 "get_zone_info": false, 00:17:48.849 "zone_management": false, 00:17:48.849 "zone_append": false, 00:17:48.849 "compare": false, 00:17:48.849 "compare_and_write": false, 00:17:48.849 "abort": true, 00:17:48.849 "seek_hole": false, 00:17:48.849 "seek_data": false, 00:17:48.849 "copy": true, 00:17:48.849 "nvme_iov_md": false 00:17:48.849 }, 00:17:48.849 "memory_domains": [ 00:17:48.849 { 00:17:48.849 "dma_device_id": "system", 00:17:48.849 "dma_device_type": 1 00:17:48.849 }, 00:17:48.849 { 00:17:48.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.849 "dma_device_type": 2 00:17:48.849 } 00:17:48.849 ], 00:17:48.849 "driver_specific": {} 00:17:48.849 } 00:17:48.849 ] 00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:48.849 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:48.850 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:48.850 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:48.850 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:48.850 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.850 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.850 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:48.850 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.850 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.850 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.850 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.850 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.850 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.850 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.850 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.850 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.850 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.850 "name": "Existed_Raid", 00:17:48.850 "uuid": "8443853e-7641-4507-8874-f8e22d9db5e5", 00:17:48.850 "strip_size_kb": 64, 00:17:48.850 "state": "configuring", 00:17:48.850 "raid_level": "raid5f", 00:17:48.850 "superblock": true, 00:17:48.850 "num_base_bdevs": 4, 00:17:48.850 "num_base_bdevs_discovered": 3, 
00:17:48.850 "num_base_bdevs_operational": 4, 00:17:48.850 "base_bdevs_list": [ 00:17:48.850 { 00:17:48.850 "name": "BaseBdev1", 00:17:48.850 "uuid": "867743fd-a22d-421d-84c0-9c5747e14213", 00:17:48.850 "is_configured": true, 00:17:48.850 "data_offset": 2048, 00:17:48.850 "data_size": 63488 00:17:48.850 }, 00:17:48.850 { 00:17:48.850 "name": "BaseBdev2", 00:17:48.850 "uuid": "597d91e6-0f1d-4241-b305-9f03b43392f4", 00:17:48.850 "is_configured": true, 00:17:48.850 "data_offset": 2048, 00:17:48.850 "data_size": 63488 00:17:48.850 }, 00:17:48.850 { 00:17:48.850 "name": "BaseBdev3", 00:17:48.850 "uuid": "e789bd53-5ea8-4df1-9c6a-cd61f4ce6fb8", 00:17:48.850 "is_configured": true, 00:17:48.850 "data_offset": 2048, 00:17:48.850 "data_size": 63488 00:17:48.850 }, 00:17:48.850 { 00:17:48.850 "name": "BaseBdev4", 00:17:48.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.850 "is_configured": false, 00:17:48.850 "data_offset": 0, 00:17:48.850 "data_size": 0 00:17:48.850 } 00:17:48.850 ] 00:17:48.850 }' 00:17:48.850 09:54:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.850 09:54:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.109 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:49.109 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.109 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.375 [2024-11-27 09:54:50.255540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:49.375 [2024-11-27 09:54:50.255953] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:49.375 [2024-11-27 09:54:50.255972] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:49.375 [2024-11-27 
09:54:50.256487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:49.375 BaseBdev4 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.375 [2024-11-27 09:54:50.264036] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:49.375 [2024-11-27 09:54:50.264126] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:49.375 [2024-11-27 09:54:50.264580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:49.375 09:54:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.375 [ 00:17:49.375 { 00:17:49.375 "name": "BaseBdev4", 00:17:49.375 "aliases": [ 00:17:49.375 "68d7d2ee-1a54-4ac5-a1ec-eb15bed532e0" 00:17:49.375 ], 00:17:49.375 "product_name": "Malloc disk", 00:17:49.375 "block_size": 512, 00:17:49.375 "num_blocks": 65536, 00:17:49.375 "uuid": "68d7d2ee-1a54-4ac5-a1ec-eb15bed532e0", 00:17:49.375 "assigned_rate_limits": { 00:17:49.375 "rw_ios_per_sec": 0, 00:17:49.375 "rw_mbytes_per_sec": 0, 00:17:49.375 "r_mbytes_per_sec": 0, 00:17:49.375 "w_mbytes_per_sec": 0 00:17:49.375 }, 00:17:49.375 "claimed": true, 00:17:49.375 "claim_type": "exclusive_write", 00:17:49.375 "zoned": false, 00:17:49.375 "supported_io_types": { 00:17:49.375 "read": true, 00:17:49.375 "write": true, 00:17:49.375 "unmap": true, 00:17:49.375 "flush": true, 00:17:49.375 "reset": true, 00:17:49.375 "nvme_admin": false, 00:17:49.375 "nvme_io": false, 00:17:49.375 "nvme_io_md": false, 00:17:49.375 "write_zeroes": true, 00:17:49.375 "zcopy": true, 00:17:49.375 "get_zone_info": false, 00:17:49.375 "zone_management": false, 00:17:49.375 "zone_append": false, 00:17:49.375 "compare": false, 00:17:49.375 "compare_and_write": false, 00:17:49.375 "abort": true, 00:17:49.375 "seek_hole": false, 00:17:49.375 "seek_data": false, 00:17:49.375 "copy": true, 00:17:49.375 "nvme_iov_md": false 00:17:49.375 }, 00:17:49.375 "memory_domains": [ 00:17:49.375 { 00:17:49.375 "dma_device_id": "system", 00:17:49.375 "dma_device_type": 1 00:17:49.375 }, 00:17:49.375 { 00:17:49.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.375 "dma_device_type": 2 00:17:49.375 } 00:17:49.375 ], 00:17:49.375 "driver_specific": {} 00:17:49.375 } 00:17:49.375 ] 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.375 09:54:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.375 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.375 "name": "Existed_Raid", 00:17:49.375 "uuid": "8443853e-7641-4507-8874-f8e22d9db5e5", 00:17:49.375 "strip_size_kb": 64, 00:17:49.375 "state": "online", 00:17:49.375 "raid_level": "raid5f", 00:17:49.375 "superblock": true, 00:17:49.375 "num_base_bdevs": 4, 00:17:49.375 "num_base_bdevs_discovered": 4, 00:17:49.375 "num_base_bdevs_operational": 4, 00:17:49.375 "base_bdevs_list": [ 00:17:49.375 { 00:17:49.375 "name": "BaseBdev1", 00:17:49.375 "uuid": "867743fd-a22d-421d-84c0-9c5747e14213", 00:17:49.375 "is_configured": true, 00:17:49.375 "data_offset": 2048, 00:17:49.376 "data_size": 63488 00:17:49.376 }, 00:17:49.376 { 00:17:49.376 "name": "BaseBdev2", 00:17:49.376 "uuid": "597d91e6-0f1d-4241-b305-9f03b43392f4", 00:17:49.376 "is_configured": true, 00:17:49.376 "data_offset": 2048, 00:17:49.376 "data_size": 63488 00:17:49.376 }, 00:17:49.376 { 00:17:49.376 "name": "BaseBdev3", 00:17:49.376 "uuid": "e789bd53-5ea8-4df1-9c6a-cd61f4ce6fb8", 00:17:49.376 "is_configured": true, 00:17:49.376 "data_offset": 2048, 00:17:49.376 "data_size": 63488 00:17:49.376 }, 00:17:49.376 { 00:17:49.376 "name": "BaseBdev4", 00:17:49.376 "uuid": "68d7d2ee-1a54-4ac5-a1ec-eb15bed532e0", 00:17:49.376 "is_configured": true, 00:17:49.376 "data_offset": 2048, 00:17:49.376 "data_size": 63488 00:17:49.376 } 00:17:49.376 ] 00:17:49.376 }' 00:17:49.376 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.376 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.647 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:49.647 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:17:49.648 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:49.648 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:49.648 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:49.648 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:49.648 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:49.648 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.648 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.648 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:49.648 [2024-11-27 09:54:50.737878] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.648 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.648 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:49.648 "name": "Existed_Raid", 00:17:49.648 "aliases": [ 00:17:49.648 "8443853e-7641-4507-8874-f8e22d9db5e5" 00:17:49.648 ], 00:17:49.648 "product_name": "Raid Volume", 00:17:49.648 "block_size": 512, 00:17:49.648 "num_blocks": 190464, 00:17:49.648 "uuid": "8443853e-7641-4507-8874-f8e22d9db5e5", 00:17:49.648 "assigned_rate_limits": { 00:17:49.648 "rw_ios_per_sec": 0, 00:17:49.648 "rw_mbytes_per_sec": 0, 00:17:49.648 "r_mbytes_per_sec": 0, 00:17:49.648 "w_mbytes_per_sec": 0 00:17:49.648 }, 00:17:49.648 "claimed": false, 00:17:49.648 "zoned": false, 00:17:49.648 "supported_io_types": { 00:17:49.648 "read": true, 00:17:49.648 "write": true, 00:17:49.648 "unmap": false, 00:17:49.648 "flush": false, 
00:17:49.648 "reset": true, 00:17:49.648 "nvme_admin": false, 00:17:49.648 "nvme_io": false, 00:17:49.648 "nvme_io_md": false, 00:17:49.648 "write_zeroes": true, 00:17:49.648 "zcopy": false, 00:17:49.648 "get_zone_info": false, 00:17:49.648 "zone_management": false, 00:17:49.648 "zone_append": false, 00:17:49.648 "compare": false, 00:17:49.648 "compare_and_write": false, 00:17:49.648 "abort": false, 00:17:49.648 "seek_hole": false, 00:17:49.648 "seek_data": false, 00:17:49.648 "copy": false, 00:17:49.648 "nvme_iov_md": false 00:17:49.648 }, 00:17:49.648 "driver_specific": { 00:17:49.648 "raid": { 00:17:49.648 "uuid": "8443853e-7641-4507-8874-f8e22d9db5e5", 00:17:49.648 "strip_size_kb": 64, 00:17:49.648 "state": "online", 00:17:49.648 "raid_level": "raid5f", 00:17:49.648 "superblock": true, 00:17:49.648 "num_base_bdevs": 4, 00:17:49.648 "num_base_bdevs_discovered": 4, 00:17:49.648 "num_base_bdevs_operational": 4, 00:17:49.648 "base_bdevs_list": [ 00:17:49.648 { 00:17:49.648 "name": "BaseBdev1", 00:17:49.648 "uuid": "867743fd-a22d-421d-84c0-9c5747e14213", 00:17:49.648 "is_configured": true, 00:17:49.648 "data_offset": 2048, 00:17:49.648 "data_size": 63488 00:17:49.648 }, 00:17:49.648 { 00:17:49.648 "name": "BaseBdev2", 00:17:49.648 "uuid": "597d91e6-0f1d-4241-b305-9f03b43392f4", 00:17:49.648 "is_configured": true, 00:17:49.648 "data_offset": 2048, 00:17:49.648 "data_size": 63488 00:17:49.648 }, 00:17:49.648 { 00:17:49.648 "name": "BaseBdev3", 00:17:49.648 "uuid": "e789bd53-5ea8-4df1-9c6a-cd61f4ce6fb8", 00:17:49.648 "is_configured": true, 00:17:49.648 "data_offset": 2048, 00:17:49.648 "data_size": 63488 00:17:49.648 }, 00:17:49.648 { 00:17:49.648 "name": "BaseBdev4", 00:17:49.648 "uuid": "68d7d2ee-1a54-4ac5-a1ec-eb15bed532e0", 00:17:49.648 "is_configured": true, 00:17:49.648 "data_offset": 2048, 00:17:49.648 "data_size": 63488 00:17:49.648 } 00:17:49.648 ] 00:17:49.648 } 00:17:49.648 } 00:17:49.648 }' 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:49.908 BaseBdev2 00:17:49.908 BaseBdev3 00:17:49.908 BaseBdev4' 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.908 09:54:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:49.908 09:54:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:49.908 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:49.908 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:49.908 09:54:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.908 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.908 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.167 [2024-11-27 09:54:51.049242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.167 "name": "Existed_Raid", 00:17:50.167 "uuid": "8443853e-7641-4507-8874-f8e22d9db5e5", 00:17:50.167 "strip_size_kb": 64, 00:17:50.167 "state": "online", 00:17:50.167 "raid_level": "raid5f", 00:17:50.167 "superblock": true, 00:17:50.167 "num_base_bdevs": 4, 00:17:50.167 "num_base_bdevs_discovered": 3, 00:17:50.167 "num_base_bdevs_operational": 3, 00:17:50.167 "base_bdevs_list": [ 00:17:50.167 { 00:17:50.167 "name": 
null, 00:17:50.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.167 "is_configured": false, 00:17:50.167 "data_offset": 0, 00:17:50.167 "data_size": 63488 00:17:50.167 }, 00:17:50.167 { 00:17:50.167 "name": "BaseBdev2", 00:17:50.167 "uuid": "597d91e6-0f1d-4241-b305-9f03b43392f4", 00:17:50.167 "is_configured": true, 00:17:50.167 "data_offset": 2048, 00:17:50.167 "data_size": 63488 00:17:50.167 }, 00:17:50.167 { 00:17:50.167 "name": "BaseBdev3", 00:17:50.167 "uuid": "e789bd53-5ea8-4df1-9c6a-cd61f4ce6fb8", 00:17:50.167 "is_configured": true, 00:17:50.167 "data_offset": 2048, 00:17:50.167 "data_size": 63488 00:17:50.167 }, 00:17:50.167 { 00:17:50.167 "name": "BaseBdev4", 00:17:50.167 "uuid": "68d7d2ee-1a54-4ac5-a1ec-eb15bed532e0", 00:17:50.167 "is_configured": true, 00:17:50.167 "data_offset": 2048, 00:17:50.167 "data_size": 63488 00:17:50.167 } 00:17:50.167 ] 00:17:50.167 }' 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.167 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.736 [2024-11-27 09:54:51.676107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:50.736 [2024-11-27 09:54:51.676348] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.736 [2024-11-27 09:54:51.783898] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.736 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.736 [2024-11-27 09:54:51.843858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:50.996 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.996 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:50.996 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:50.996 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:50.996 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.996 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.996 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.996 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.996 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:50.996 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:50.996 09:54:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:50.996 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.996 09:54:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.996 [2024-11-27 
09:54:52.001454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:50.996 [2024-11-27 09:54:52.001635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:50.996 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.996 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:50.996 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:50.996 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.996 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:50.996 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.996 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.256 09:54:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.256 BaseBdev2 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.256 [ 00:17:51.256 { 00:17:51.256 "name": "BaseBdev2", 00:17:51.256 "aliases": [ 00:17:51.256 "b8eb2eba-4d4e-4e9f-ac79-7dfcbedcf6b7" 00:17:51.256 ], 00:17:51.256 "product_name": "Malloc disk", 00:17:51.256 "block_size": 512, 00:17:51.256 
"num_blocks": 65536, 00:17:51.256 "uuid": "b8eb2eba-4d4e-4e9f-ac79-7dfcbedcf6b7", 00:17:51.256 "assigned_rate_limits": { 00:17:51.256 "rw_ios_per_sec": 0, 00:17:51.256 "rw_mbytes_per_sec": 0, 00:17:51.256 "r_mbytes_per_sec": 0, 00:17:51.256 "w_mbytes_per_sec": 0 00:17:51.256 }, 00:17:51.256 "claimed": false, 00:17:51.256 "zoned": false, 00:17:51.256 "supported_io_types": { 00:17:51.256 "read": true, 00:17:51.256 "write": true, 00:17:51.256 "unmap": true, 00:17:51.256 "flush": true, 00:17:51.256 "reset": true, 00:17:51.256 "nvme_admin": false, 00:17:51.256 "nvme_io": false, 00:17:51.256 "nvme_io_md": false, 00:17:51.256 "write_zeroes": true, 00:17:51.256 "zcopy": true, 00:17:51.256 "get_zone_info": false, 00:17:51.256 "zone_management": false, 00:17:51.256 "zone_append": false, 00:17:51.256 "compare": false, 00:17:51.256 "compare_and_write": false, 00:17:51.256 "abort": true, 00:17:51.256 "seek_hole": false, 00:17:51.256 "seek_data": false, 00:17:51.256 "copy": true, 00:17:51.256 "nvme_iov_md": false 00:17:51.256 }, 00:17:51.256 "memory_domains": [ 00:17:51.256 { 00:17:51.256 "dma_device_id": "system", 00:17:51.256 "dma_device_type": 1 00:17:51.256 }, 00:17:51.256 { 00:17:51.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.256 "dma_device_type": 2 00:17:51.256 } 00:17:51.256 ], 00:17:51.256 "driver_specific": {} 00:17:51.256 } 00:17:51.256 ] 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:51.256 09:54:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.256 BaseBdev3 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:51.256 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:51.257 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:51.257 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:51.257 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:51.257 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:51.257 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:51.257 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.257 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.257 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.257 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:51.257 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.257 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.257 [ 00:17:51.257 { 00:17:51.257 "name": "BaseBdev3", 00:17:51.257 "aliases": [ 00:17:51.257 
"7c8850ad-8667-46c5-9d80-72921f86da0b" 00:17:51.257 ], 00:17:51.257 "product_name": "Malloc disk", 00:17:51.257 "block_size": 512, 00:17:51.257 "num_blocks": 65536, 00:17:51.257 "uuid": "7c8850ad-8667-46c5-9d80-72921f86da0b", 00:17:51.257 "assigned_rate_limits": { 00:17:51.257 "rw_ios_per_sec": 0, 00:17:51.257 "rw_mbytes_per_sec": 0, 00:17:51.257 "r_mbytes_per_sec": 0, 00:17:51.257 "w_mbytes_per_sec": 0 00:17:51.257 }, 00:17:51.257 "claimed": false, 00:17:51.257 "zoned": false, 00:17:51.257 "supported_io_types": { 00:17:51.257 "read": true, 00:17:51.257 "write": true, 00:17:51.257 "unmap": true, 00:17:51.257 "flush": true, 00:17:51.257 "reset": true, 00:17:51.257 "nvme_admin": false, 00:17:51.257 "nvme_io": false, 00:17:51.257 "nvme_io_md": false, 00:17:51.257 "write_zeroes": true, 00:17:51.257 "zcopy": true, 00:17:51.257 "get_zone_info": false, 00:17:51.257 "zone_management": false, 00:17:51.257 "zone_append": false, 00:17:51.257 "compare": false, 00:17:51.257 "compare_and_write": false, 00:17:51.257 "abort": true, 00:17:51.257 "seek_hole": false, 00:17:51.257 "seek_data": false, 00:17:51.257 "copy": true, 00:17:51.257 "nvme_iov_md": false 00:17:51.257 }, 00:17:51.257 "memory_domains": [ 00:17:51.257 { 00:17:51.257 "dma_device_id": "system", 00:17:51.257 "dma_device_type": 1 00:17:51.257 }, 00:17:51.257 { 00:17:51.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.257 "dma_device_type": 2 00:17:51.257 } 00:17:51.257 ], 00:17:51.257 "driver_specific": {} 00:17:51.257 } 00:17:51.257 ] 00:17:51.257 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.257 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:51.257 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:51.257 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:51.257 09:54:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:51.257 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.257 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.517 BaseBdev4 00:17:51.517 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.517 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:51.517 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:17:51.517 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:51.517 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:51.517 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:51.517 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:51.517 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:51.517 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.517 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.517 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.517 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:51.517 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.517 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:51.517 [ 00:17:51.517 { 00:17:51.517 "name": "BaseBdev4", 00:17:51.517 "aliases": [ 00:17:51.517 "bc487d95-763b-47c0-a129-652f26b0282b" 00:17:51.517 ], 00:17:51.517 "product_name": "Malloc disk", 00:17:51.517 "block_size": 512, 00:17:51.517 "num_blocks": 65536, 00:17:51.517 "uuid": "bc487d95-763b-47c0-a129-652f26b0282b", 00:17:51.517 "assigned_rate_limits": { 00:17:51.517 "rw_ios_per_sec": 0, 00:17:51.517 "rw_mbytes_per_sec": 0, 00:17:51.517 "r_mbytes_per_sec": 0, 00:17:51.517 "w_mbytes_per_sec": 0 00:17:51.517 }, 00:17:51.517 "claimed": false, 00:17:51.517 "zoned": false, 00:17:51.517 "supported_io_types": { 00:17:51.517 "read": true, 00:17:51.517 "write": true, 00:17:51.517 "unmap": true, 00:17:51.517 "flush": true, 00:17:51.517 "reset": true, 00:17:51.517 "nvme_admin": false, 00:17:51.517 "nvme_io": false, 00:17:51.517 "nvme_io_md": false, 00:17:51.517 "write_zeroes": true, 00:17:51.517 "zcopy": true, 00:17:51.517 "get_zone_info": false, 00:17:51.517 "zone_management": false, 00:17:51.517 "zone_append": false, 00:17:51.517 "compare": false, 00:17:51.517 "compare_and_write": false, 00:17:51.517 "abort": true, 00:17:51.517 "seek_hole": false, 00:17:51.517 "seek_data": false, 00:17:51.517 "copy": true, 00:17:51.517 "nvme_iov_md": false 00:17:51.517 }, 00:17:51.517 "memory_domains": [ 00:17:51.517 { 00:17:51.517 "dma_device_id": "system", 00:17:51.517 "dma_device_type": 1 00:17:51.517 }, 00:17:51.517 { 00:17:51.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.517 "dma_device_type": 2 00:17:51.517 } 00:17:51.517 ], 00:17:51.517 "driver_specific": {} 00:17:51.517 } 00:17:51.517 ] 00:17:51.517 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.517 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:51.517 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:51.517 09:54:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:51.517 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:51.517 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.517 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.517 [2024-11-27 09:54:52.442491] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:51.518 [2024-11-27 09:54:52.442577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:51.518 [2024-11-27 09:54:52.442615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:51.518 [2024-11-27 09:54:52.445073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:51.518 [2024-11-27 09:54:52.445146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:51.518 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.518 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:51.518 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.518 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:51.518 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:51.518 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:51.518 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:17:51.518 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.518 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.518 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.518 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.518 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.518 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.518 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.518 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.518 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.518 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.518 "name": "Existed_Raid", 00:17:51.518 "uuid": "1b403df5-fbcd-4791-9861-26fa478aeb0f", 00:17:51.518 "strip_size_kb": 64, 00:17:51.518 "state": "configuring", 00:17:51.518 "raid_level": "raid5f", 00:17:51.518 "superblock": true, 00:17:51.518 "num_base_bdevs": 4, 00:17:51.518 "num_base_bdevs_discovered": 3, 00:17:51.518 "num_base_bdevs_operational": 4, 00:17:51.518 "base_bdevs_list": [ 00:17:51.518 { 00:17:51.518 "name": "BaseBdev1", 00:17:51.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.518 "is_configured": false, 00:17:51.518 "data_offset": 0, 00:17:51.518 "data_size": 0 00:17:51.518 }, 00:17:51.518 { 00:17:51.518 "name": "BaseBdev2", 00:17:51.518 "uuid": "b8eb2eba-4d4e-4e9f-ac79-7dfcbedcf6b7", 00:17:51.518 "is_configured": true, 00:17:51.518 "data_offset": 2048, 00:17:51.518 
"data_size": 63488 00:17:51.518 }, 00:17:51.518 { 00:17:51.518 "name": "BaseBdev3", 00:17:51.518 "uuid": "7c8850ad-8667-46c5-9d80-72921f86da0b", 00:17:51.518 "is_configured": true, 00:17:51.518 "data_offset": 2048, 00:17:51.518 "data_size": 63488 00:17:51.518 }, 00:17:51.518 { 00:17:51.518 "name": "BaseBdev4", 00:17:51.518 "uuid": "bc487d95-763b-47c0-a129-652f26b0282b", 00:17:51.518 "is_configured": true, 00:17:51.518 "data_offset": 2048, 00:17:51.518 "data_size": 63488 00:17:51.518 } 00:17:51.518 ] 00:17:51.518 }' 00:17:51.518 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.518 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.100 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:52.101 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.101 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.101 [2024-11-27 09:54:52.957618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:52.101 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.101 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:52.101 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.101 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.101 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.101 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.101 09:54:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:52.101 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.101 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.101 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.101 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.101 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.101 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.101 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.101 09:54:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.101 09:54:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.101 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.101 "name": "Existed_Raid", 00:17:52.101 "uuid": "1b403df5-fbcd-4791-9861-26fa478aeb0f", 00:17:52.101 "strip_size_kb": 64, 00:17:52.101 "state": "configuring", 00:17:52.101 "raid_level": "raid5f", 00:17:52.101 "superblock": true, 00:17:52.101 "num_base_bdevs": 4, 00:17:52.101 "num_base_bdevs_discovered": 2, 00:17:52.101 "num_base_bdevs_operational": 4, 00:17:52.101 "base_bdevs_list": [ 00:17:52.101 { 00:17:52.101 "name": "BaseBdev1", 00:17:52.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.101 "is_configured": false, 00:17:52.101 "data_offset": 0, 00:17:52.101 "data_size": 0 00:17:52.101 }, 00:17:52.101 { 00:17:52.101 "name": null, 00:17:52.101 "uuid": "b8eb2eba-4d4e-4e9f-ac79-7dfcbedcf6b7", 00:17:52.101 
"is_configured": false, 00:17:52.101 "data_offset": 0, 00:17:52.101 "data_size": 63488 00:17:52.101 }, 00:17:52.101 { 00:17:52.101 "name": "BaseBdev3", 00:17:52.101 "uuid": "7c8850ad-8667-46c5-9d80-72921f86da0b", 00:17:52.101 "is_configured": true, 00:17:52.101 "data_offset": 2048, 00:17:52.101 "data_size": 63488 00:17:52.101 }, 00:17:52.101 { 00:17:52.101 "name": "BaseBdev4", 00:17:52.101 "uuid": "bc487d95-763b-47c0-a129-652f26b0282b", 00:17:52.101 "is_configured": true, 00:17:52.101 "data_offset": 2048, 00:17:52.101 "data_size": 63488 00:17:52.101 } 00:17:52.101 ] 00:17:52.101 }' 00:17:52.101 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.101 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.360 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.360 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:52.360 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.360 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.360 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.360 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:52.360 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:52.360 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.360 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.360 [2024-11-27 09:54:53.483833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
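Throughout this test, the harness pipes `rpc_cmd bdev_raid_get_bdevs all` output through jq filters such as `.[] | select(.name == "Existed_Raid")` and `.[0].base_bdevs_list[1].is_configured`, then compares the result in a bash `[[ ... ]]`. A minimal Python sketch of that same selection logic, run against an abbreviated copy of the "Existed_Raid" record dumped above (field values taken from the log; `select_by_name` is a made-up helper, not an SPDK function):

```python
import json

# Abbreviated form of the "Existed_Raid" record dumped above (values from the log).
raid_bdevs = json.loads("""
[{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid5f",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 2,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": false},
    {"name": null,        "is_configured": false},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}]
""")

def select_by_name(bdevs, name):
    # Mirrors jq's '.[] | select(.name == "...")' (hypothetical helper).
    return next(b for b in bdevs if b["name"] == name)

raid = select_by_name(raid_bdevs, "Existed_Raid")

# jq '.[0].base_bdevs_list[1].is_configured' -> false before BaseBdev2 is re-added
print(raid["base_bdevs_list"][1]["is_configured"])  # False

# num_base_bdevs_discovered should equal the count of configured base bdevs
configured = sum(b["is_configured"] for b in raid["base_bdevs_list"])
print(configured)  # 2
```

Note how `num_base_bdevs_discovered: 2` in the dump matches the two entries with `"is_configured": true`, which is exactly the invariant the subsequent add/remove steps exercise.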
00:17:52.360 BaseBdev1 00:17:52.360 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.360 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:52.360 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:52.360 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:52.360 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:52.360 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:52.360 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:52.361 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:52.361 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.361 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.620 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.620 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:52.620 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.620 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.620 [ 00:17:52.620 { 00:17:52.620 "name": "BaseBdev1", 00:17:52.620 "aliases": [ 00:17:52.620 "d52025a9-8869-4e46-aade-d1783f569d24" 00:17:52.620 ], 00:17:52.620 "product_name": "Malloc disk", 00:17:52.620 "block_size": 512, 00:17:52.620 "num_blocks": 65536, 00:17:52.620 "uuid": "d52025a9-8869-4e46-aade-d1783f569d24", 
00:17:52.620 "assigned_rate_limits": { 00:17:52.620 "rw_ios_per_sec": 0, 00:17:52.620 "rw_mbytes_per_sec": 0, 00:17:52.620 "r_mbytes_per_sec": 0, 00:17:52.620 "w_mbytes_per_sec": 0 00:17:52.620 }, 00:17:52.620 "claimed": true, 00:17:52.620 "claim_type": "exclusive_write", 00:17:52.620 "zoned": false, 00:17:52.620 "supported_io_types": { 00:17:52.620 "read": true, 00:17:52.620 "write": true, 00:17:52.620 "unmap": true, 00:17:52.620 "flush": true, 00:17:52.620 "reset": true, 00:17:52.620 "nvme_admin": false, 00:17:52.620 "nvme_io": false, 00:17:52.620 "nvme_io_md": false, 00:17:52.620 "write_zeroes": true, 00:17:52.620 "zcopy": true, 00:17:52.620 "get_zone_info": false, 00:17:52.620 "zone_management": false, 00:17:52.620 "zone_append": false, 00:17:52.620 "compare": false, 00:17:52.620 "compare_and_write": false, 00:17:52.620 "abort": true, 00:17:52.620 "seek_hole": false, 00:17:52.620 "seek_data": false, 00:17:52.620 "copy": true, 00:17:52.620 "nvme_iov_md": false 00:17:52.620 }, 00:17:52.620 "memory_domains": [ 00:17:52.620 { 00:17:52.620 "dma_device_id": "system", 00:17:52.620 "dma_device_type": 1 00:17:52.620 }, 00:17:52.620 { 00:17:52.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.620 "dma_device_type": 2 00:17:52.620 } 00:17:52.620 ], 00:17:52.620 "driver_specific": {} 00:17:52.620 } 00:17:52.620 ] 00:17:52.620 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.620 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:52.620 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:52.620 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.620 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:52.620 09:54:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:52.620 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:52.621 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:52.621 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.621 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.621 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.621 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.621 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.621 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.621 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.621 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.621 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.621 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.621 "name": "Existed_Raid", 00:17:52.621 "uuid": "1b403df5-fbcd-4791-9861-26fa478aeb0f", 00:17:52.621 "strip_size_kb": 64, 00:17:52.621 "state": "configuring", 00:17:52.621 "raid_level": "raid5f", 00:17:52.621 "superblock": true, 00:17:52.621 "num_base_bdevs": 4, 00:17:52.621 "num_base_bdevs_discovered": 3, 00:17:52.621 "num_base_bdevs_operational": 4, 00:17:52.621 "base_bdevs_list": [ 00:17:52.621 { 00:17:52.621 "name": "BaseBdev1", 00:17:52.621 "uuid": "d52025a9-8869-4e46-aade-d1783f569d24", 
00:17:52.621 "is_configured": true, 00:17:52.621 "data_offset": 2048, 00:17:52.621 "data_size": 63488 00:17:52.621 }, 00:17:52.621 { 00:17:52.621 "name": null, 00:17:52.621 "uuid": "b8eb2eba-4d4e-4e9f-ac79-7dfcbedcf6b7", 00:17:52.621 "is_configured": false, 00:17:52.621 "data_offset": 0, 00:17:52.621 "data_size": 63488 00:17:52.621 }, 00:17:52.621 { 00:17:52.621 "name": "BaseBdev3", 00:17:52.621 "uuid": "7c8850ad-8667-46c5-9d80-72921f86da0b", 00:17:52.621 "is_configured": true, 00:17:52.621 "data_offset": 2048, 00:17:52.621 "data_size": 63488 00:17:52.621 }, 00:17:52.621 { 00:17:52.621 "name": "BaseBdev4", 00:17:52.621 "uuid": "bc487d95-763b-47c0-a129-652f26b0282b", 00:17:52.621 "is_configured": true, 00:17:52.621 "data_offset": 2048, 00:17:52.621 "data_size": 63488 00:17:52.621 } 00:17:52.621 ] 00:17:52.621 }' 00:17:52.621 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.621 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.879 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:52.879 09:54:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.879 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.879 09:54:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.879 09:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.137 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.138 [2024-11-27 09:54:54.039081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.138 "name": "Existed_Raid", 00:17:53.138 "uuid": "1b403df5-fbcd-4791-9861-26fa478aeb0f", 00:17:53.138 "strip_size_kb": 64, 00:17:53.138 "state": "configuring", 00:17:53.138 "raid_level": "raid5f", 00:17:53.138 "superblock": true, 00:17:53.138 "num_base_bdevs": 4, 00:17:53.138 "num_base_bdevs_discovered": 2, 00:17:53.138 "num_base_bdevs_operational": 4, 00:17:53.138 "base_bdevs_list": [ 00:17:53.138 { 00:17:53.138 "name": "BaseBdev1", 00:17:53.138 "uuid": "d52025a9-8869-4e46-aade-d1783f569d24", 00:17:53.138 "is_configured": true, 00:17:53.138 "data_offset": 2048, 00:17:53.138 "data_size": 63488 00:17:53.138 }, 00:17:53.138 { 00:17:53.138 "name": null, 00:17:53.138 "uuid": "b8eb2eba-4d4e-4e9f-ac79-7dfcbedcf6b7", 00:17:53.138 "is_configured": false, 00:17:53.138 "data_offset": 0, 00:17:53.138 "data_size": 63488 00:17:53.138 }, 00:17:53.138 { 00:17:53.138 "name": null, 00:17:53.138 "uuid": "7c8850ad-8667-46c5-9d80-72921f86da0b", 00:17:53.138 "is_configured": false, 00:17:53.138 "data_offset": 0, 00:17:53.138 "data_size": 63488 00:17:53.138 }, 00:17:53.138 { 00:17:53.138 "name": "BaseBdev4", 00:17:53.138 "uuid": "bc487d95-763b-47c0-a129-652f26b0282b", 00:17:53.138 "is_configured": true, 00:17:53.138 "data_offset": 2048, 00:17:53.138 "data_size": 63488 00:17:53.138 } 00:17:53.138 ] 00:17:53.138 }' 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.138 09:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.397 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.397 09:54:54 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:53.397 09:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.397 09:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.397 09:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.656 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.657 [2024-11-27 09:54:54.546232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.657 "name": "Existed_Raid", 00:17:53.657 "uuid": "1b403df5-fbcd-4791-9861-26fa478aeb0f", 00:17:53.657 "strip_size_kb": 64, 00:17:53.657 "state": "configuring", 00:17:53.657 "raid_level": "raid5f", 00:17:53.657 "superblock": true, 00:17:53.657 "num_base_bdevs": 4, 00:17:53.657 "num_base_bdevs_discovered": 3, 00:17:53.657 "num_base_bdevs_operational": 4, 00:17:53.657 "base_bdevs_list": [ 00:17:53.657 { 00:17:53.657 "name": "BaseBdev1", 00:17:53.657 "uuid": "d52025a9-8869-4e46-aade-d1783f569d24", 00:17:53.657 "is_configured": true, 00:17:53.657 "data_offset": 2048, 00:17:53.657 "data_size": 63488 00:17:53.657 }, 00:17:53.657 { 00:17:53.657 "name": null, 00:17:53.657 "uuid": "b8eb2eba-4d4e-4e9f-ac79-7dfcbedcf6b7", 00:17:53.657 "is_configured": false, 00:17:53.657 "data_offset": 0, 00:17:53.657 "data_size": 63488 00:17:53.657 }, 00:17:53.657 { 00:17:53.657 "name": "BaseBdev3", 00:17:53.657 "uuid": "7c8850ad-8667-46c5-9d80-72921f86da0b", 
00:17:53.657 "is_configured": true, 00:17:53.657 "data_offset": 2048, 00:17:53.657 "data_size": 63488 00:17:53.657 }, 00:17:53.657 { 00:17:53.657 "name": "BaseBdev4", 00:17:53.657 "uuid": "bc487d95-763b-47c0-a129-652f26b0282b", 00:17:53.657 "is_configured": true, 00:17:53.657 "data_offset": 2048, 00:17:53.657 "data_size": 63488 00:17:53.657 } 00:17:53.657 ] 00:17:53.657 }' 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.657 09:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.916 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.916 09:54:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:53.916 09:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.916 09:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.916 09:54:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.916 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:53.916 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:53.916 09:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.916 09:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.916 [2024-11-27 09:54:55.013504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:54.176 09:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.176 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
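Each `verify_raid_bdev_state Existed_Raid configuring raid5f 64 4` call above compares the jq-extracted record against the expected state, raid level, strip size, and operational bdev count. A rough Python equivalent of those comparisons (a simplified sketch; the real checks live in `bdev/bdev_raid.sh`, and the record values here are copied from the dump after BaseBdev3 was re-added):

```python
def verify_raid_bdev_state(info, expected_state, raid_level, strip_size_kb, num_operational):
    """Sketch of the field checks bdev_raid.sh performs on one raid bdev record."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational

# Record as dumped after bdev_raid_add_base_bdev re-added BaseBdev3 (values from the log).
info = {
    "name": "Existed_Raid",
    "state": "configuring",            # still configuring: BaseBdev1 slot holds a null entry
    "raid_level": "raid5f",
    "strip_size_kb": 64,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 3,    # three of four slots configured
    "num_base_bdevs_operational": 4,
}
verify_raid_bdev_state(info, "configuring", "raid5f", 64, 4)
print("state checks passed")
```

The raid stays in `configuring` rather than `online` because discovered (3) is still below operational (4); only after the final base bdev is supplied does the array assemble.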
configuring raid5f 64 4 00:17:54.176 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.176 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.176 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:54.176 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.176 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:54.176 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.176 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.176 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.176 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.176 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.176 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.176 09:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.176 09:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.176 09:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.176 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.176 "name": "Existed_Raid", 00:17:54.176 "uuid": "1b403df5-fbcd-4791-9861-26fa478aeb0f", 00:17:54.176 "strip_size_kb": 64, 00:17:54.176 "state": "configuring", 00:17:54.176 "raid_level": "raid5f", 
00:17:54.176 "superblock": true, 00:17:54.176 "num_base_bdevs": 4, 00:17:54.176 "num_base_bdevs_discovered": 2, 00:17:54.176 "num_base_bdevs_operational": 4, 00:17:54.176 "base_bdevs_list": [ 00:17:54.176 { 00:17:54.176 "name": null, 00:17:54.176 "uuid": "d52025a9-8869-4e46-aade-d1783f569d24", 00:17:54.176 "is_configured": false, 00:17:54.176 "data_offset": 0, 00:17:54.176 "data_size": 63488 00:17:54.176 }, 00:17:54.176 { 00:17:54.176 "name": null, 00:17:54.176 "uuid": "b8eb2eba-4d4e-4e9f-ac79-7dfcbedcf6b7", 00:17:54.176 "is_configured": false, 00:17:54.176 "data_offset": 0, 00:17:54.176 "data_size": 63488 00:17:54.176 }, 00:17:54.176 { 00:17:54.176 "name": "BaseBdev3", 00:17:54.176 "uuid": "7c8850ad-8667-46c5-9d80-72921f86da0b", 00:17:54.176 "is_configured": true, 00:17:54.176 "data_offset": 2048, 00:17:54.176 "data_size": 63488 00:17:54.176 }, 00:17:54.176 { 00:17:54.176 "name": "BaseBdev4", 00:17:54.176 "uuid": "bc487d95-763b-47c0-a129-652f26b0282b", 00:17:54.176 "is_configured": true, 00:17:54.176 "data_offset": 2048, 00:17:54.176 "data_size": 63488 00:17:54.176 } 00:17:54.176 ] 00:17:54.176 }' 00:17:54.176 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.176 09:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.435 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.435 09:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.435 09:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.435 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:54.435 09:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.695 [2024-11-27 09:54:55.600601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.695 "name": "Existed_Raid", 00:17:54.695 "uuid": "1b403df5-fbcd-4791-9861-26fa478aeb0f", 00:17:54.695 "strip_size_kb": 64, 00:17:54.695 "state": "configuring", 00:17:54.695 "raid_level": "raid5f", 00:17:54.695 "superblock": true, 00:17:54.695 "num_base_bdevs": 4, 00:17:54.695 "num_base_bdevs_discovered": 3, 00:17:54.695 "num_base_bdevs_operational": 4, 00:17:54.695 "base_bdevs_list": [ 00:17:54.695 { 00:17:54.695 "name": null, 00:17:54.695 "uuid": "d52025a9-8869-4e46-aade-d1783f569d24", 00:17:54.695 "is_configured": false, 00:17:54.695 "data_offset": 0, 00:17:54.695 "data_size": 63488 00:17:54.695 }, 00:17:54.695 { 00:17:54.695 "name": "BaseBdev2", 00:17:54.695 "uuid": "b8eb2eba-4d4e-4e9f-ac79-7dfcbedcf6b7", 00:17:54.695 "is_configured": true, 00:17:54.695 "data_offset": 2048, 00:17:54.695 "data_size": 63488 00:17:54.695 }, 00:17:54.695 { 00:17:54.695 "name": "BaseBdev3", 00:17:54.695 "uuid": "7c8850ad-8667-46c5-9d80-72921f86da0b", 00:17:54.695 "is_configured": true, 00:17:54.695 "data_offset": 2048, 00:17:54.695 "data_size": 63488 00:17:54.695 }, 00:17:54.695 { 00:17:54.695 "name": "BaseBdev4", 00:17:54.695 "uuid": "bc487d95-763b-47c0-a129-652f26b0282b", 00:17:54.695 "is_configured": true, 00:17:54.695 "data_offset": 2048, 00:17:54.695 "data_size": 63488 00:17:54.695 } 00:17:54.695 ] 00:17:54.695 }' 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:17:54.695 09:54:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.956 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.956 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.956 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.956 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:54.956 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.215 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d52025a9-8869-4e46-aade-d1783f569d24 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.216 [2024-11-27 09:54:56.206638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:55.216 [2024-11-27 09:54:56.207175] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:55.216 [2024-11-27 09:54:56.207198] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:55.216 [2024-11-27 09:54:56.207538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:55.216 NewBaseBdev 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.216 [2024-11-27 09:54:56.214963] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:55.216 [2024-11-27 09:54:56.215070] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:55.216 [2024-11-27 09:54:56.215372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.216 [ 00:17:55.216 { 00:17:55.216 "name": "NewBaseBdev", 00:17:55.216 "aliases": [ 00:17:55.216 "d52025a9-8869-4e46-aade-d1783f569d24" 00:17:55.216 ], 00:17:55.216 "product_name": "Malloc disk", 00:17:55.216 "block_size": 512, 00:17:55.216 "num_blocks": 65536, 00:17:55.216 "uuid": "d52025a9-8869-4e46-aade-d1783f569d24", 00:17:55.216 "assigned_rate_limits": { 00:17:55.216 "rw_ios_per_sec": 0, 00:17:55.216 "rw_mbytes_per_sec": 0, 00:17:55.216 "r_mbytes_per_sec": 0, 00:17:55.216 "w_mbytes_per_sec": 0 00:17:55.216 }, 00:17:55.216 "claimed": true, 00:17:55.216 "claim_type": "exclusive_write", 00:17:55.216 "zoned": false, 00:17:55.216 "supported_io_types": { 00:17:55.216 "read": true, 00:17:55.216 "write": true, 00:17:55.216 "unmap": true, 00:17:55.216 "flush": true, 00:17:55.216 "reset": true, 00:17:55.216 "nvme_admin": false, 00:17:55.216 "nvme_io": false, 00:17:55.216 "nvme_io_md": false, 00:17:55.216 "write_zeroes": true, 00:17:55.216 "zcopy": true, 00:17:55.216 "get_zone_info": false, 00:17:55.216 "zone_management": false, 00:17:55.216 "zone_append": false, 00:17:55.216 "compare": false, 00:17:55.216 "compare_and_write": false, 00:17:55.216 "abort": true, 00:17:55.216 "seek_hole": false, 00:17:55.216 "seek_data": false, 00:17:55.216 "copy": true, 00:17:55.216 "nvme_iov_md": false 00:17:55.216 }, 00:17:55.216 "memory_domains": [ 00:17:55.216 { 00:17:55.216 "dma_device_id": "system", 00:17:55.216 "dma_device_type": 1 00:17:55.216 }, 00:17:55.216 { 00:17:55.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.216 "dma_device_type": 2 00:17:55.216 } 
00:17:55.216 ], 00:17:55.216 "driver_specific": {} 00:17:55.216 } 00:17:55.216 ] 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.216 
09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.216 "name": "Existed_Raid", 00:17:55.216 "uuid": "1b403df5-fbcd-4791-9861-26fa478aeb0f", 00:17:55.216 "strip_size_kb": 64, 00:17:55.216 "state": "online", 00:17:55.216 "raid_level": "raid5f", 00:17:55.216 "superblock": true, 00:17:55.216 "num_base_bdevs": 4, 00:17:55.216 "num_base_bdevs_discovered": 4, 00:17:55.216 "num_base_bdevs_operational": 4, 00:17:55.216 "base_bdevs_list": [ 00:17:55.216 { 00:17:55.216 "name": "NewBaseBdev", 00:17:55.216 "uuid": "d52025a9-8869-4e46-aade-d1783f569d24", 00:17:55.216 "is_configured": true, 00:17:55.216 "data_offset": 2048, 00:17:55.216 "data_size": 63488 00:17:55.216 }, 00:17:55.216 { 00:17:55.216 "name": "BaseBdev2", 00:17:55.216 "uuid": "b8eb2eba-4d4e-4e9f-ac79-7dfcbedcf6b7", 00:17:55.216 "is_configured": true, 00:17:55.216 "data_offset": 2048, 00:17:55.216 "data_size": 63488 00:17:55.216 }, 00:17:55.216 { 00:17:55.216 "name": "BaseBdev3", 00:17:55.216 "uuid": "7c8850ad-8667-46c5-9d80-72921f86da0b", 00:17:55.216 "is_configured": true, 00:17:55.216 "data_offset": 2048, 00:17:55.216 "data_size": 63488 00:17:55.216 }, 00:17:55.216 { 00:17:55.216 "name": "BaseBdev4", 00:17:55.216 "uuid": "bc487d95-763b-47c0-a129-652f26b0282b", 00:17:55.216 "is_configured": true, 00:17:55.216 "data_offset": 2048, 00:17:55.216 "data_size": 63488 00:17:55.216 } 00:17:55.216 ] 00:17:55.216 }' 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.216 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.786 [2024-11-27 09:54:56.692901] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:55.786 "name": "Existed_Raid", 00:17:55.786 "aliases": [ 00:17:55.786 "1b403df5-fbcd-4791-9861-26fa478aeb0f" 00:17:55.786 ], 00:17:55.786 "product_name": "Raid Volume", 00:17:55.786 "block_size": 512, 00:17:55.786 "num_blocks": 190464, 00:17:55.786 "uuid": "1b403df5-fbcd-4791-9861-26fa478aeb0f", 00:17:55.786 "assigned_rate_limits": { 00:17:55.786 "rw_ios_per_sec": 0, 00:17:55.786 "rw_mbytes_per_sec": 0, 00:17:55.786 "r_mbytes_per_sec": 0, 00:17:55.786 "w_mbytes_per_sec": 0 00:17:55.786 }, 00:17:55.786 "claimed": false, 00:17:55.786 "zoned": false, 00:17:55.786 "supported_io_types": { 00:17:55.786 "read": true, 00:17:55.786 "write": true, 00:17:55.786 "unmap": false, 00:17:55.786 "flush": false, 
00:17:55.786 "reset": true, 00:17:55.786 "nvme_admin": false, 00:17:55.786 "nvme_io": false, 00:17:55.786 "nvme_io_md": false, 00:17:55.786 "write_zeroes": true, 00:17:55.786 "zcopy": false, 00:17:55.786 "get_zone_info": false, 00:17:55.786 "zone_management": false, 00:17:55.786 "zone_append": false, 00:17:55.786 "compare": false, 00:17:55.786 "compare_and_write": false, 00:17:55.786 "abort": false, 00:17:55.786 "seek_hole": false, 00:17:55.786 "seek_data": false, 00:17:55.786 "copy": false, 00:17:55.786 "nvme_iov_md": false 00:17:55.786 }, 00:17:55.786 "driver_specific": { 00:17:55.786 "raid": { 00:17:55.786 "uuid": "1b403df5-fbcd-4791-9861-26fa478aeb0f", 00:17:55.786 "strip_size_kb": 64, 00:17:55.786 "state": "online", 00:17:55.786 "raid_level": "raid5f", 00:17:55.786 "superblock": true, 00:17:55.786 "num_base_bdevs": 4, 00:17:55.786 "num_base_bdevs_discovered": 4, 00:17:55.786 "num_base_bdevs_operational": 4, 00:17:55.786 "base_bdevs_list": [ 00:17:55.786 { 00:17:55.786 "name": "NewBaseBdev", 00:17:55.786 "uuid": "d52025a9-8869-4e46-aade-d1783f569d24", 00:17:55.786 "is_configured": true, 00:17:55.786 "data_offset": 2048, 00:17:55.786 "data_size": 63488 00:17:55.786 }, 00:17:55.786 { 00:17:55.786 "name": "BaseBdev2", 00:17:55.786 "uuid": "b8eb2eba-4d4e-4e9f-ac79-7dfcbedcf6b7", 00:17:55.786 "is_configured": true, 00:17:55.786 "data_offset": 2048, 00:17:55.786 "data_size": 63488 00:17:55.786 }, 00:17:55.786 { 00:17:55.786 "name": "BaseBdev3", 00:17:55.786 "uuid": "7c8850ad-8667-46c5-9d80-72921f86da0b", 00:17:55.786 "is_configured": true, 00:17:55.786 "data_offset": 2048, 00:17:55.786 "data_size": 63488 00:17:55.786 }, 00:17:55.786 { 00:17:55.786 "name": "BaseBdev4", 00:17:55.786 "uuid": "bc487d95-763b-47c0-a129-652f26b0282b", 00:17:55.786 "is_configured": true, 00:17:55.786 "data_offset": 2048, 00:17:55.786 "data_size": 63488 00:17:55.786 } 00:17:55.786 ] 00:17:55.786 } 00:17:55.786 } 00:17:55.786 }' 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:55.786 BaseBdev2 00:17:55.786 BaseBdev3 00:17:55.786 BaseBdev4' 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.786 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.047 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.047 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:56.047 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:56.047 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:56.047 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:56.047 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.047 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:17:56.047 09:54:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:56.047 09:54:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.047 09:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:56.047 09:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:56.047 09:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:56.047 09:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.047 09:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.047 [2024-11-27 09:54:57.016117] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:56.047 [2024-11-27 09:54:57.016170] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.047 [2024-11-27 09:54:57.016289] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.047 [2024-11-27 09:54:57.016645] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.048 [2024-11-27 09:54:57.016661] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:56.048 09:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.048 09:54:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83759 00:17:56.048 09:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83759 ']' 00:17:56.048 09:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 
83759 00:17:56.048 09:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:56.048 09:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.048 09:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83759 00:17:56.048 09:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:56.048 killing process with pid 83759 00:17:56.048 09:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:56.048 09:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83759' 00:17:56.048 09:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83759 00:17:56.048 [2024-11-27 09:54:57.066167] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:56.048 09:54:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83759 00:17:56.617 [2024-11-27 09:54:57.501656] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:57.996 09:54:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:57.996 00:17:57.996 real 0m12.093s 00:17:57.996 user 0m18.726s 00:17:57.996 sys 0m2.531s 00:17:57.996 09:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.996 ************************************ 00:17:57.996 END TEST raid5f_state_function_test_sb 00:17:57.996 ************************************ 00:17:57.996 09:54:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.996 09:54:58 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:17:57.996 09:54:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 
-le 1 ']' 00:17:57.997 09:54:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:57.997 09:54:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:57.997 ************************************ 00:17:57.997 START TEST raid5f_superblock_test 00:17:57.997 ************************************ 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:57.997 09:54:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84435 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84435 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84435 ']' 00:17:57.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.997 09:54:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.997 [2024-11-27 09:54:58.927852] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:17:57.997 [2024-11-27 09:54:58.928130] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84435 ] 00:17:57.997 [2024-11-27 09:54:59.107936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.255 [2024-11-27 09:54:59.251127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.513 [2024-11-27 09:54:59.494309] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:58.513 [2024-11-27 09:54:59.494519] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.776 malloc1 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.776 [2024-11-27 09:54:59.835346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:58.776 [2024-11-27 09:54:59.835438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.776 [2024-11-27 09:54:59.835470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:58.776 [2024-11-27 09:54:59.835482] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.776 [2024-11-27 09:54:59.838202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.776 [2024-11-27 09:54:59.838251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:58.776 pt1 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.776 malloc2 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.776 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.776 [2024-11-27 09:54:59.901886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:58.776 [2024-11-27 09:54:59.902099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.776 [2024-11-27 09:54:59.902164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:58.776 [2024-11-27 09:54:59.902205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.776 [2024-11-27 09:54:59.904905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.776 [2024-11-27 09:54:59.905031] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:58.776 pt2 00:17:59.035 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.035 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:59.035 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:59.035 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:59.035 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:59.035 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:59.035 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:59.035 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:59.035 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:59.035 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:59.035 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.035 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.035 malloc3 00:17:59.035 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.035 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:59.035 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.035 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.035 [2024-11-27 09:54:59.980803] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:59.035 [2024-11-27 09:54:59.980986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.035 [2024-11-27 09:54:59.981056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:59.035 [2024-11-27 09:54:59.981136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.035 [2024-11-27 09:54:59.983791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.036 [2024-11-27 09:54:59.983892] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:59.036 pt3 00:17:59.036 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.036 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:59.036 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:59.036 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:17:59.036 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:17:59.036 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:59.036 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:59.036 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:59.036 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:59.036 09:54:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:17:59.036 09:54:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.036 09:54:59 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.036 malloc4 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.036 [2024-11-27 09:55:00.049661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:59.036 [2024-11-27 09:55:00.049860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.036 [2024-11-27 09:55:00.049912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:59.036 [2024-11-27 09:55:00.049954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.036 [2024-11-27 09:55:00.052662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.036 [2024-11-27 09:55:00.052777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:59.036 pt4 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:59.036 [2024-11-27 09:55:00.061712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:59.036 [2024-11-27 09:55:00.064033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:59.036 [2024-11-27 09:55:00.064143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:59.036 [2024-11-27 09:55:00.064198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:59.036 [2024-11-27 09:55:00.064434] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:59.036 [2024-11-27 09:55:00.064453] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:59.036 [2024-11-27 09:55:00.064780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:59.036 [2024-11-27 09:55:00.072678] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:59.036 [2024-11-27 09:55:00.072712] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:59.036 [2024-11-27 09:55:00.073031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:59.036 
09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.036 "name": "raid_bdev1", 00:17:59.036 "uuid": "f54e0751-4120-4de4-86ee-6785335ccc01", 00:17:59.036 "strip_size_kb": 64, 00:17:59.036 "state": "online", 00:17:59.036 "raid_level": "raid5f", 00:17:59.036 "superblock": true, 00:17:59.036 "num_base_bdevs": 4, 00:17:59.036 "num_base_bdevs_discovered": 4, 00:17:59.036 "num_base_bdevs_operational": 4, 00:17:59.036 "base_bdevs_list": [ 00:17:59.036 { 00:17:59.036 "name": "pt1", 00:17:59.036 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:59.036 "is_configured": true, 00:17:59.036 "data_offset": 2048, 00:17:59.036 "data_size": 63488 00:17:59.036 }, 00:17:59.036 { 00:17:59.036 "name": "pt2", 00:17:59.036 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.036 "is_configured": true, 00:17:59.036 "data_offset": 2048, 00:17:59.036 
"data_size": 63488 00:17:59.036 }, 00:17:59.036 { 00:17:59.036 "name": "pt3", 00:17:59.036 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:59.036 "is_configured": true, 00:17:59.036 "data_offset": 2048, 00:17:59.036 "data_size": 63488 00:17:59.036 }, 00:17:59.036 { 00:17:59.036 "name": "pt4", 00:17:59.036 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:59.036 "is_configured": true, 00:17:59.036 "data_offset": 2048, 00:17:59.036 "data_size": 63488 00:17:59.036 } 00:17:59.036 ] 00:17:59.036 }' 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.036 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.606 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:59.606 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:59.606 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:59.606 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:59.606 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:59.606 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:59.606 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:59.606 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:59.606 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.606 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.606 [2024-11-27 09:55:00.582329] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.606 09:55:00 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.606 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:59.606 "name": "raid_bdev1", 00:17:59.606 "aliases": [ 00:17:59.606 "f54e0751-4120-4de4-86ee-6785335ccc01" 00:17:59.606 ], 00:17:59.606 "product_name": "Raid Volume", 00:17:59.606 "block_size": 512, 00:17:59.606 "num_blocks": 190464, 00:17:59.606 "uuid": "f54e0751-4120-4de4-86ee-6785335ccc01", 00:17:59.606 "assigned_rate_limits": { 00:17:59.606 "rw_ios_per_sec": 0, 00:17:59.606 "rw_mbytes_per_sec": 0, 00:17:59.606 "r_mbytes_per_sec": 0, 00:17:59.606 "w_mbytes_per_sec": 0 00:17:59.606 }, 00:17:59.606 "claimed": false, 00:17:59.606 "zoned": false, 00:17:59.606 "supported_io_types": { 00:17:59.606 "read": true, 00:17:59.606 "write": true, 00:17:59.606 "unmap": false, 00:17:59.606 "flush": false, 00:17:59.606 "reset": true, 00:17:59.606 "nvme_admin": false, 00:17:59.606 "nvme_io": false, 00:17:59.606 "nvme_io_md": false, 00:17:59.606 "write_zeroes": true, 00:17:59.606 "zcopy": false, 00:17:59.606 "get_zone_info": false, 00:17:59.606 "zone_management": false, 00:17:59.606 "zone_append": false, 00:17:59.606 "compare": false, 00:17:59.606 "compare_and_write": false, 00:17:59.606 "abort": false, 00:17:59.606 "seek_hole": false, 00:17:59.606 "seek_data": false, 00:17:59.606 "copy": false, 00:17:59.606 "nvme_iov_md": false 00:17:59.606 }, 00:17:59.606 "driver_specific": { 00:17:59.606 "raid": { 00:17:59.606 "uuid": "f54e0751-4120-4de4-86ee-6785335ccc01", 00:17:59.606 "strip_size_kb": 64, 00:17:59.606 "state": "online", 00:17:59.606 "raid_level": "raid5f", 00:17:59.606 "superblock": true, 00:17:59.606 "num_base_bdevs": 4, 00:17:59.606 "num_base_bdevs_discovered": 4, 00:17:59.606 "num_base_bdevs_operational": 4, 00:17:59.606 "base_bdevs_list": [ 00:17:59.606 { 00:17:59.606 "name": "pt1", 00:17:59.606 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:59.606 "is_configured": true, 00:17:59.606 "data_offset": 2048, 
00:17:59.606 "data_size": 63488 00:17:59.606 }, 00:17:59.606 { 00:17:59.606 "name": "pt2", 00:17:59.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.606 "is_configured": true, 00:17:59.606 "data_offset": 2048, 00:17:59.606 "data_size": 63488 00:17:59.606 }, 00:17:59.606 { 00:17:59.606 "name": "pt3", 00:17:59.606 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:59.606 "is_configured": true, 00:17:59.606 "data_offset": 2048, 00:17:59.606 "data_size": 63488 00:17:59.606 }, 00:17:59.606 { 00:17:59.606 "name": "pt4", 00:17:59.606 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:59.606 "is_configured": true, 00:17:59.606 "data_offset": 2048, 00:17:59.606 "data_size": 63488 00:17:59.606 } 00:17:59.606 ] 00:17:59.606 } 00:17:59.606 } 00:17:59.606 }' 00:17:59.606 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:59.606 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:59.606 pt2 00:17:59.606 pt3 00:17:59.606 pt4' 00:17:59.606 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.606 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:59.606 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.606 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:59.606 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.607 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.607 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.607 09:55:00 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.866 [2024-11-27 09:55:00.913733] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f54e0751-4120-4de4-86ee-6785335ccc01 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
f54e0751-4120-4de4-86ee-6785335ccc01 ']' 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.866 [2024-11-27 09:55:00.961434] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:59.866 [2024-11-27 09:55:00.961482] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:59.866 [2024-11-27 09:55:00.961614] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:59.866 [2024-11-27 09:55:00.961721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:59.866 [2024-11-27 09:55:00.961741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:59.866 09:55:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:00.126 
09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.126 09:55:01 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:00.126 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.127 [2024-11-27 09:55:01.129272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:00.127 [2024-11-27 09:55:01.131735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:00.127 [2024-11-27 09:55:01.131801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:00.127 [2024-11-27 09:55:01.131841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:00.127 [2024-11-27 09:55:01.131910] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:00.127 [2024-11-27 09:55:01.131984] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:00.127 [2024-11-27 09:55:01.132019] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:00.127 [2024-11-27 09:55:01.132043] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:00.127 [2024-11-27 09:55:01.132061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:00.127 [2024-11-27 09:55:01.132075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:00.127 request: 00:18:00.127 { 00:18:00.127 "name": "raid_bdev1", 00:18:00.127 "raid_level": "raid5f", 00:18:00.127 "base_bdevs": [ 00:18:00.127 "malloc1", 00:18:00.127 "malloc2", 00:18:00.127 "malloc3", 00:18:00.127 "malloc4" 00:18:00.127 ], 00:18:00.127 "strip_size_kb": 64, 00:18:00.127 "superblock": false, 00:18:00.127 "method": "bdev_raid_create", 00:18:00.127 "req_id": 1 00:18:00.127 } 00:18:00.127 Got JSON-RPC error response 
00:18:00.127 response: 00:18:00.127 { 00:18:00.127 "code": -17, 00:18:00.127 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:00.127 } 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.127 [2024-11-27 09:55:01.197136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:00.127 [2024-11-27 09:55:01.197383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:18:00.127 [2024-11-27 09:55:01.197432] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:00.127 [2024-11-27 09:55:01.197475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.127 [2024-11-27 09:55:01.200312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.127 [2024-11-27 09:55:01.200429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:00.127 [2024-11-27 09:55:01.200616] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:00.127 [2024-11-27 09:55:01.200731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:00.127 pt1 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.127 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.405 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.405 "name": "raid_bdev1", 00:18:00.405 "uuid": "f54e0751-4120-4de4-86ee-6785335ccc01", 00:18:00.405 "strip_size_kb": 64, 00:18:00.405 "state": "configuring", 00:18:00.405 "raid_level": "raid5f", 00:18:00.405 "superblock": true, 00:18:00.405 "num_base_bdevs": 4, 00:18:00.405 "num_base_bdevs_discovered": 1, 00:18:00.405 "num_base_bdevs_operational": 4, 00:18:00.405 "base_bdevs_list": [ 00:18:00.405 { 00:18:00.405 "name": "pt1", 00:18:00.405 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:00.405 "is_configured": true, 00:18:00.405 "data_offset": 2048, 00:18:00.405 "data_size": 63488 00:18:00.405 }, 00:18:00.405 { 00:18:00.405 "name": null, 00:18:00.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.405 "is_configured": false, 00:18:00.405 "data_offset": 2048, 00:18:00.405 "data_size": 63488 00:18:00.405 }, 00:18:00.405 { 00:18:00.405 "name": null, 00:18:00.405 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:00.405 "is_configured": false, 00:18:00.405 "data_offset": 2048, 00:18:00.405 "data_size": 63488 00:18:00.405 }, 00:18:00.405 { 00:18:00.405 "name": null, 00:18:00.405 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:00.405 "is_configured": false, 00:18:00.405 "data_offset": 2048, 00:18:00.405 "data_size": 63488 00:18:00.405 } 00:18:00.405 ] 00:18:00.405 }' 
00:18:00.405 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.405 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.680 [2024-11-27 09:55:01.672316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:00.680 [2024-11-27 09:55:01.672557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.680 [2024-11-27 09:55:01.672590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:00.680 [2024-11-27 09:55:01.672606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.680 [2024-11-27 09:55:01.673239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.680 [2024-11-27 09:55:01.673282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:00.680 [2024-11-27 09:55:01.673403] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:00.680 [2024-11-27 09:55:01.673437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.680 pt2 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.680 [2024-11-27 09:55:01.684328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:18:00.680 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.680 "name": "raid_bdev1", 00:18:00.680 "uuid": "f54e0751-4120-4de4-86ee-6785335ccc01", 00:18:00.680 "strip_size_kb": 64, 00:18:00.680 "state": "configuring", 00:18:00.680 "raid_level": "raid5f", 00:18:00.680 "superblock": true, 00:18:00.680 "num_base_bdevs": 4, 00:18:00.680 "num_base_bdevs_discovered": 1, 00:18:00.680 "num_base_bdevs_operational": 4, 00:18:00.680 "base_bdevs_list": [ 00:18:00.680 { 00:18:00.680 "name": "pt1", 00:18:00.680 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:00.680 "is_configured": true, 00:18:00.680 "data_offset": 2048, 00:18:00.680 "data_size": 63488 00:18:00.680 }, 00:18:00.680 { 00:18:00.680 "name": null, 00:18:00.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.680 "is_configured": false, 00:18:00.680 "data_offset": 0, 00:18:00.681 "data_size": 63488 00:18:00.681 }, 00:18:00.681 { 00:18:00.681 "name": null, 00:18:00.681 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:00.681 "is_configured": false, 00:18:00.681 "data_offset": 2048, 00:18:00.681 "data_size": 63488 00:18:00.681 }, 00:18:00.681 { 00:18:00.681 "name": null, 00:18:00.681 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:00.681 "is_configured": false, 00:18:00.681 "data_offset": 2048, 00:18:00.681 "data_size": 63488 00:18:00.681 } 00:18:00.681 ] 00:18:00.681 }' 00:18:00.681 09:55:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.681 09:55:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.250 [2024-11-27 09:55:02.183411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:01.250 [2024-11-27 09:55:02.183551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.250 [2024-11-27 09:55:02.183580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:01.250 [2024-11-27 09:55:02.183592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.250 [2024-11-27 09:55:02.184215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.250 [2024-11-27 09:55:02.184246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:01.250 [2024-11-27 09:55:02.184370] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:01.250 [2024-11-27 09:55:02.184399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:01.250 pt2 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.250 [2024-11-27 09:55:02.195363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:18:01.250 [2024-11-27 09:55:02.195445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.250 [2024-11-27 09:55:02.195482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:01.250 [2024-11-27 09:55:02.195495] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.250 [2024-11-27 09:55:02.196052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.250 [2024-11-27 09:55:02.196091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:01.250 [2024-11-27 09:55:02.196210] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:01.250 [2024-11-27 09:55:02.196268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:01.250 pt3 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.250 [2024-11-27 09:55:02.207310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:01.250 [2024-11-27 09:55:02.207459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.250 [2024-11-27 09:55:02.207491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:01.250 [2024-11-27 09:55:02.207502] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.250 [2024-11-27 09:55:02.208122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.250 [2024-11-27 09:55:02.208147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:01.250 [2024-11-27 09:55:02.208259] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:01.250 [2024-11-27 09:55:02.208292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:01.250 [2024-11-27 09:55:02.208472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:01.250 [2024-11-27 09:55:02.208489] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:01.250 [2024-11-27 09:55:02.208794] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:01.250 [2024-11-27 09:55:02.216061] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:01.250 [2024-11-27 09:55:02.216096] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:01.250 [2024-11-27 09:55:02.216367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.250 pt4 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.250 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.250 "name": "raid_bdev1", 00:18:01.250 "uuid": "f54e0751-4120-4de4-86ee-6785335ccc01", 00:18:01.250 "strip_size_kb": 64, 00:18:01.250 "state": "online", 00:18:01.250 "raid_level": "raid5f", 00:18:01.250 "superblock": true, 00:18:01.250 "num_base_bdevs": 4, 00:18:01.250 "num_base_bdevs_discovered": 4, 00:18:01.250 "num_base_bdevs_operational": 4, 00:18:01.250 "base_bdevs_list": [ 00:18:01.250 { 00:18:01.250 "name": "pt1", 00:18:01.250 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.250 "is_configured": true, 00:18:01.250 
"data_offset": 2048, 00:18:01.250 "data_size": 63488 00:18:01.250 }, 00:18:01.250 { 00:18:01.250 "name": "pt2", 00:18:01.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.251 "is_configured": true, 00:18:01.251 "data_offset": 2048, 00:18:01.251 "data_size": 63488 00:18:01.251 }, 00:18:01.251 { 00:18:01.251 "name": "pt3", 00:18:01.251 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:01.251 "is_configured": true, 00:18:01.251 "data_offset": 2048, 00:18:01.251 "data_size": 63488 00:18:01.251 }, 00:18:01.251 { 00:18:01.251 "name": "pt4", 00:18:01.251 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:01.251 "is_configured": true, 00:18:01.251 "data_offset": 2048, 00:18:01.251 "data_size": 63488 00:18:01.251 } 00:18:01.251 ] 00:18:01.251 }' 00:18:01.251 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.251 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.820 09:55:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.820 [2024-11-27 09:55:02.673947] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:01.820 "name": "raid_bdev1", 00:18:01.820 "aliases": [ 00:18:01.820 "f54e0751-4120-4de4-86ee-6785335ccc01" 00:18:01.820 ], 00:18:01.820 "product_name": "Raid Volume", 00:18:01.820 "block_size": 512, 00:18:01.820 "num_blocks": 190464, 00:18:01.820 "uuid": "f54e0751-4120-4de4-86ee-6785335ccc01", 00:18:01.820 "assigned_rate_limits": { 00:18:01.820 "rw_ios_per_sec": 0, 00:18:01.820 "rw_mbytes_per_sec": 0, 00:18:01.820 "r_mbytes_per_sec": 0, 00:18:01.820 "w_mbytes_per_sec": 0 00:18:01.820 }, 00:18:01.820 "claimed": false, 00:18:01.820 "zoned": false, 00:18:01.820 "supported_io_types": { 00:18:01.820 "read": true, 00:18:01.820 "write": true, 00:18:01.820 "unmap": false, 00:18:01.820 "flush": false, 00:18:01.820 "reset": true, 00:18:01.820 "nvme_admin": false, 00:18:01.820 "nvme_io": false, 00:18:01.820 "nvme_io_md": false, 00:18:01.820 "write_zeroes": true, 00:18:01.820 "zcopy": false, 00:18:01.820 "get_zone_info": false, 00:18:01.820 "zone_management": false, 00:18:01.820 "zone_append": false, 00:18:01.820 "compare": false, 00:18:01.820 "compare_and_write": false, 00:18:01.820 "abort": false, 00:18:01.820 "seek_hole": false, 00:18:01.820 "seek_data": false, 00:18:01.820 "copy": false, 00:18:01.820 "nvme_iov_md": false 00:18:01.820 }, 00:18:01.820 "driver_specific": { 00:18:01.820 "raid": { 00:18:01.820 "uuid": "f54e0751-4120-4de4-86ee-6785335ccc01", 00:18:01.820 "strip_size_kb": 64, 00:18:01.820 "state": "online", 00:18:01.820 "raid_level": "raid5f", 00:18:01.820 "superblock": true, 00:18:01.820 "num_base_bdevs": 4, 00:18:01.820 "num_base_bdevs_discovered": 4, 
00:18:01.820 "num_base_bdevs_operational": 4, 00:18:01.820 "base_bdevs_list": [ 00:18:01.820 { 00:18:01.820 "name": "pt1", 00:18:01.820 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:01.820 "is_configured": true, 00:18:01.820 "data_offset": 2048, 00:18:01.820 "data_size": 63488 00:18:01.820 }, 00:18:01.820 { 00:18:01.820 "name": "pt2", 00:18:01.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:01.820 "is_configured": true, 00:18:01.820 "data_offset": 2048, 00:18:01.820 "data_size": 63488 00:18:01.820 }, 00:18:01.820 { 00:18:01.820 "name": "pt3", 00:18:01.820 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:01.820 "is_configured": true, 00:18:01.820 "data_offset": 2048, 00:18:01.820 "data_size": 63488 00:18:01.820 }, 00:18:01.820 { 00:18:01.820 "name": "pt4", 00:18:01.820 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:01.820 "is_configured": true, 00:18:01.820 "data_offset": 2048, 00:18:01.820 "data_size": 63488 00:18:01.820 } 00:18:01.820 ] 00:18:01.820 } 00:18:01.820 } 00:18:01.820 }' 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:01.820 pt2 00:18:01.820 pt3 00:18:01.820 pt4' 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.820 09:55:02 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.820 
09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.820 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.079 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.079 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.079 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:02.079 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:02.079 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.079 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.079 09:55:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:02.079 09:55:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:02.080 [2024-11-27 09:55:03.017444] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f54e0751-4120-4de4-86ee-6785335ccc01 '!=' f54e0751-4120-4de4-86ee-6785335ccc01 ']' 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.080 [2024-11-27 09:55:03.065259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.080 "name": "raid_bdev1", 00:18:02.080 "uuid": "f54e0751-4120-4de4-86ee-6785335ccc01", 00:18:02.080 "strip_size_kb": 64, 00:18:02.080 "state": "online", 00:18:02.080 "raid_level": "raid5f", 00:18:02.080 "superblock": true, 00:18:02.080 "num_base_bdevs": 4, 00:18:02.080 "num_base_bdevs_discovered": 3, 00:18:02.080 "num_base_bdevs_operational": 3, 00:18:02.080 "base_bdevs_list": [ 00:18:02.080 { 00:18:02.080 "name": null, 00:18:02.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.080 "is_configured": false, 00:18:02.080 "data_offset": 0, 00:18:02.080 "data_size": 63488 00:18:02.080 }, 00:18:02.080 { 00:18:02.080 "name": "pt2", 00:18:02.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.080 "is_configured": true, 00:18:02.080 "data_offset": 2048, 00:18:02.080 "data_size": 63488 00:18:02.080 }, 00:18:02.080 { 00:18:02.080 "name": "pt3", 00:18:02.080 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:02.080 "is_configured": true, 00:18:02.080 "data_offset": 2048, 00:18:02.080 "data_size": 63488 00:18:02.080 }, 00:18:02.080 { 00:18:02.080 "name": "pt4", 00:18:02.080 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:02.080 "is_configured": true, 00:18:02.080 
"data_offset": 2048, 00:18:02.080 "data_size": 63488 00:18:02.080 } 00:18:02.080 ] 00:18:02.080 }' 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.080 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.649 [2024-11-27 09:55:03.488405] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.649 [2024-11-27 09:55:03.488466] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.649 [2024-11-27 09:55:03.488600] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.649 [2024-11-27 09:55:03.488699] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.649 [2024-11-27 09:55:03.488711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.649 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.650 [2024-11-27 09:55:03.588233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:02.650 [2024-11-27 09:55:03.588336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.650 [2024-11-27 09:55:03.588365] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:02.650 [2024-11-27 09:55:03.588376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.650 [2024-11-27 09:55:03.591201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.650 [2024-11-27 09:55:03.591307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:02.650 [2024-11-27 09:55:03.591448] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:02.650 [2024-11-27 09:55:03.591508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:02.650 pt2 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.650 "name": "raid_bdev1", 00:18:02.650 "uuid": "f54e0751-4120-4de4-86ee-6785335ccc01", 00:18:02.650 "strip_size_kb": 64, 00:18:02.650 "state": "configuring", 00:18:02.650 "raid_level": "raid5f", 00:18:02.650 "superblock": true, 00:18:02.650 
"num_base_bdevs": 4, 00:18:02.650 "num_base_bdevs_discovered": 1, 00:18:02.650 "num_base_bdevs_operational": 3, 00:18:02.650 "base_bdevs_list": [ 00:18:02.650 { 00:18:02.650 "name": null, 00:18:02.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.650 "is_configured": false, 00:18:02.650 "data_offset": 2048, 00:18:02.650 "data_size": 63488 00:18:02.650 }, 00:18:02.650 { 00:18:02.650 "name": "pt2", 00:18:02.650 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:02.650 "is_configured": true, 00:18:02.650 "data_offset": 2048, 00:18:02.650 "data_size": 63488 00:18:02.650 }, 00:18:02.650 { 00:18:02.650 "name": null, 00:18:02.650 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:02.650 "is_configured": false, 00:18:02.650 "data_offset": 2048, 00:18:02.650 "data_size": 63488 00:18:02.650 }, 00:18:02.650 { 00:18:02.650 "name": null, 00:18:02.650 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:02.650 "is_configured": false, 00:18:02.650 "data_offset": 2048, 00:18:02.650 "data_size": 63488 00:18:02.650 } 00:18:02.650 ] 00:18:02.650 }' 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.650 09:55:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.219 [2024-11-27 09:55:04.095432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:03.219 [2024-11-27 
09:55:04.095713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.219 [2024-11-27 09:55:04.095772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:03.219 [2024-11-27 09:55:04.095840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.219 [2024-11-27 09:55:04.096495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.219 [2024-11-27 09:55:04.096586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:03.219 [2024-11-27 09:55:04.096749] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:03.219 [2024-11-27 09:55:04.096817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:03.219 pt3 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.219 "name": "raid_bdev1", 00:18:03.219 "uuid": "f54e0751-4120-4de4-86ee-6785335ccc01", 00:18:03.219 "strip_size_kb": 64, 00:18:03.219 "state": "configuring", 00:18:03.219 "raid_level": "raid5f", 00:18:03.219 "superblock": true, 00:18:03.219 "num_base_bdevs": 4, 00:18:03.219 "num_base_bdevs_discovered": 2, 00:18:03.219 "num_base_bdevs_operational": 3, 00:18:03.219 "base_bdevs_list": [ 00:18:03.219 { 00:18:03.219 "name": null, 00:18:03.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.219 "is_configured": false, 00:18:03.219 "data_offset": 2048, 00:18:03.219 "data_size": 63488 00:18:03.219 }, 00:18:03.219 { 00:18:03.219 "name": "pt2", 00:18:03.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.219 "is_configured": true, 00:18:03.219 "data_offset": 2048, 00:18:03.219 "data_size": 63488 00:18:03.219 }, 00:18:03.219 { 00:18:03.219 "name": "pt3", 00:18:03.219 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:03.219 "is_configured": true, 00:18:03.219 "data_offset": 2048, 00:18:03.219 "data_size": 63488 00:18:03.219 }, 00:18:03.219 { 00:18:03.219 "name": null, 00:18:03.219 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:03.219 "is_configured": false, 00:18:03.219 "data_offset": 2048, 
00:18:03.219 "data_size": 63488 00:18:03.219 } 00:18:03.219 ] 00:18:03.219 }' 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.219 09:55:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.479 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:03.479 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:03.479 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:18:03.479 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:03.479 09:55:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.479 09:55:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.479 [2024-11-27 09:55:04.498721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:03.479 [2024-11-27 09:55:04.498825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.479 [2024-11-27 09:55:04.498857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:03.479 [2024-11-27 09:55:04.498870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.479 [2024-11-27 09:55:04.499493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.479 [2024-11-27 09:55:04.499523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:03.479 [2024-11-27 09:55:04.499644] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:03.479 [2024-11-27 09:55:04.499682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:03.479 [2024-11-27 09:55:04.499856] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:03.479 [2024-11-27 09:55:04.499866] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:03.479 [2024-11-27 09:55:04.500183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:03.479 [2024-11-27 09:55:04.507488] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:03.480 [2024-11-27 09:55:04.507527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:03.480 pt4 00:18:03.480 [2024-11-27 09:55:04.507932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.480 09:55:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.480 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:03.480 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.480 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.480 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.480 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.480 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:03.480 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.480 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.480 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.480 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.480 
09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.480 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.480 09:55:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.480 09:55:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:03.480 09:55:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.480 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.480 "name": "raid_bdev1", 00:18:03.480 "uuid": "f54e0751-4120-4de4-86ee-6785335ccc01", 00:18:03.480 "strip_size_kb": 64, 00:18:03.480 "state": "online", 00:18:03.480 "raid_level": "raid5f", 00:18:03.480 "superblock": true, 00:18:03.480 "num_base_bdevs": 4, 00:18:03.480 "num_base_bdevs_discovered": 3, 00:18:03.480 "num_base_bdevs_operational": 3, 00:18:03.480 "base_bdevs_list": [ 00:18:03.480 { 00:18:03.480 "name": null, 00:18:03.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.480 "is_configured": false, 00:18:03.480 "data_offset": 2048, 00:18:03.480 "data_size": 63488 00:18:03.480 }, 00:18:03.480 { 00:18:03.480 "name": "pt2", 00:18:03.480 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.480 "is_configured": true, 00:18:03.480 "data_offset": 2048, 00:18:03.480 "data_size": 63488 00:18:03.480 }, 00:18:03.480 { 00:18:03.480 "name": "pt3", 00:18:03.480 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:03.480 "is_configured": true, 00:18:03.480 "data_offset": 2048, 00:18:03.480 "data_size": 63488 00:18:03.480 }, 00:18:03.480 { 00:18:03.480 "name": "pt4", 00:18:03.480 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:03.480 "is_configured": true, 00:18:03.480 "data_offset": 2048, 00:18:03.480 "data_size": 63488 00:18:03.480 } 00:18:03.480 ] 00:18:03.480 }' 00:18:03.480 09:55:04 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.480 09:55:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.049 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:04.049 09:55:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.049 09:55:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.049 [2024-11-27 09:55:04.977799] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:04.049 [2024-11-27 09:55:04.977962] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:04.049 [2024-11-27 09:55:04.978148] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.049 [2024-11-27 09:55:04.978289] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.049 [2024-11-27 09:55:04.978358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:04.049 09:55:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.049 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.049 09:55:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.049 09:55:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.049 09:55:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:04.049 09:55:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.049 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:04.049 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:18:04.049 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:18:04.049 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:18:04.049 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:18:04.049 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.049 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.049 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.049 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:04.049 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.049 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.049 [2024-11-27 09:55:05.053678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:04.049 [2024-11-27 09:55:05.053896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.049 [2024-11-27 09:55:05.053945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:04.049 [2024-11-27 09:55:05.053968] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.049 [2024-11-27 09:55:05.057071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.049 [2024-11-27 09:55:05.057123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:04.049 [2024-11-27 09:55:05.057254] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:04.049 [2024-11-27 09:55:05.057317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:04.049 
[2024-11-27 09:55:05.057486] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:04.050 [2024-11-27 09:55:05.057502] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:04.050 [2024-11-27 09:55:05.057521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:04.050 [2024-11-27 09:55:05.057607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:04.050 [2024-11-27 09:55:05.057747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:04.050 pt1 00:18:04.050 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.050 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:18:04.050 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:04.050 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.050 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:04.050 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.050 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.050 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:04.050 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.050 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.050 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.050 09:55:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.050 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.050 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.050 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.050 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.050 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.050 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.050 "name": "raid_bdev1", 00:18:04.050 "uuid": "f54e0751-4120-4de4-86ee-6785335ccc01", 00:18:04.050 "strip_size_kb": 64, 00:18:04.050 "state": "configuring", 00:18:04.050 "raid_level": "raid5f", 00:18:04.050 "superblock": true, 00:18:04.050 "num_base_bdevs": 4, 00:18:04.050 "num_base_bdevs_discovered": 2, 00:18:04.050 "num_base_bdevs_operational": 3, 00:18:04.050 "base_bdevs_list": [ 00:18:04.050 { 00:18:04.050 "name": null, 00:18:04.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.050 "is_configured": false, 00:18:04.050 "data_offset": 2048, 00:18:04.050 "data_size": 63488 00:18:04.050 }, 00:18:04.050 { 00:18:04.050 "name": "pt2", 00:18:04.050 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.050 "is_configured": true, 00:18:04.050 "data_offset": 2048, 00:18:04.050 "data_size": 63488 00:18:04.050 }, 00:18:04.050 { 00:18:04.050 "name": "pt3", 00:18:04.050 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:04.050 "is_configured": true, 00:18:04.050 "data_offset": 2048, 00:18:04.050 "data_size": 63488 00:18:04.050 }, 00:18:04.050 { 00:18:04.050 "name": null, 00:18:04.050 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:04.050 "is_configured": false, 00:18:04.050 "data_offset": 2048, 00:18:04.050 "data_size": 63488 00:18:04.050 } 00:18:04.050 ] 
00:18:04.050 }' 00:18:04.050 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.050 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.619 [2024-11-27 09:55:05.577068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:04.619 [2024-11-27 09:55:05.577280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.619 [2024-11-27 09:55:05.577340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:04.619 [2024-11-27 09:55:05.577403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.619 [2024-11-27 09:55:05.578116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.619 [2024-11-27 09:55:05.578208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:18:04.619 [2024-11-27 09:55:05.578399] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:04.619 [2024-11-27 09:55:05.578475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:04.619 [2024-11-27 09:55:05.578711] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:04.619 [2024-11-27 09:55:05.578763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:04.619 [2024-11-27 09:55:05.579155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:04.619 [2024-11-27 09:55:05.587052] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:04.619 [2024-11-27 09:55:05.587133] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:04.619 [2024-11-27 09:55:05.587587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.619 pt4 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.619 09:55:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.619 "name": "raid_bdev1", 00:18:04.619 "uuid": "f54e0751-4120-4de4-86ee-6785335ccc01", 00:18:04.619 "strip_size_kb": 64, 00:18:04.619 "state": "online", 00:18:04.619 "raid_level": "raid5f", 00:18:04.619 "superblock": true, 00:18:04.619 "num_base_bdevs": 4, 00:18:04.619 "num_base_bdevs_discovered": 3, 00:18:04.619 "num_base_bdevs_operational": 3, 00:18:04.619 "base_bdevs_list": [ 00:18:04.619 { 00:18:04.619 "name": null, 00:18:04.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.619 "is_configured": false, 00:18:04.619 "data_offset": 2048, 00:18:04.619 "data_size": 63488 00:18:04.619 }, 00:18:04.619 { 00:18:04.619 "name": "pt2", 00:18:04.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.619 "is_configured": true, 00:18:04.619 "data_offset": 2048, 00:18:04.619 "data_size": 63488 00:18:04.619 }, 00:18:04.619 { 00:18:04.619 "name": "pt3", 00:18:04.619 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:04.619 "is_configured": true, 00:18:04.619 "data_offset": 2048, 00:18:04.619 "data_size": 63488 
00:18:04.619 }, 00:18:04.619 { 00:18:04.619 "name": "pt4", 00:18:04.619 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:04.619 "is_configured": true, 00:18:04.619 "data_offset": 2048, 00:18:04.619 "data_size": 63488 00:18:04.619 } 00:18:04.619 ] 00:18:04.619 }' 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.619 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.879 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:04.879 09:55:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:04.879 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.879 09:55:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.137 09:55:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.137 09:55:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:05.137 09:55:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:05.137 09:55:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.138 09:55:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.138 09:55:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:05.138 [2024-11-27 09:55:06.054212] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:05.138 09:55:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.138 09:55:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f54e0751-4120-4de4-86ee-6785335ccc01 '!=' f54e0751-4120-4de4-86ee-6785335ccc01 ']' 00:18:05.138 09:55:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84435 00:18:05.138 09:55:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84435 ']' 00:18:05.138 09:55:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84435 00:18:05.138 09:55:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:18:05.138 09:55:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:05.138 09:55:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84435 00:18:05.138 09:55:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:05.138 killing process with pid 84435 00:18:05.138 09:55:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:05.138 09:55:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84435' 00:18:05.138 09:55:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84435 00:18:05.138 [2024-11-27 09:55:06.141737] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:05.138 09:55:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84435 00:18:05.138 [2024-11-27 09:55:06.141910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:05.138 [2024-11-27 09:55:06.142072] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:05.138 [2024-11-27 09:55:06.142107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:05.706 [2024-11-27 09:55:06.576873] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:07.087 09:55:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:07.087 
00:18:07.087 real 0m8.995s 00:18:07.087 user 0m13.795s 00:18:07.087 sys 0m1.883s 00:18:07.087 09:55:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.087 ************************************ 00:18:07.087 END TEST raid5f_superblock_test 00:18:07.087 ************************************ 00:18:07.087 09:55:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.087 09:55:07 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:07.087 09:55:07 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:18:07.087 09:55:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:07.087 09:55:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.087 09:55:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:07.087 ************************************ 00:18:07.087 START TEST raid5f_rebuild_test 00:18:07.087 ************************************ 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:07.087 09:55:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84919 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84919 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84919 ']' 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.087 09:55:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.087 [2024-11-27 09:55:08.006040] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:18:07.087 [2024-11-27 09:55:08.006297] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:07.087 Zero copy mechanism will not be used. 
00:18:07.087 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84919 ] 00:18:07.087 [2024-11-27 09:55:08.165237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.346 [2024-11-27 09:55:08.307670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.606 [2024-11-27 09:55:08.545428] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:07.606 [2024-11-27 09:55:08.545665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.867 BaseBdev1_malloc 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.867 [2024-11-27 09:55:08.927149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:07.867 [2024-11-27 09:55:08.927354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:18:07.867 [2024-11-27 09:55:08.927389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:07.867 [2024-11-27 09:55:08.927405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.867 [2024-11-27 09:55:08.930136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.867 [2024-11-27 09:55:08.930191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:07.867 BaseBdev1 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.867 BaseBdev2_malloc 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.867 [2024-11-27 09:55:08.989483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:07.867 [2024-11-27 09:55:08.989610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.867 [2024-11-27 09:55:08.989643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:07.867 [2024-11-27 09:55:08.989658] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.867 [2024-11-27 09:55:08.992340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.867 [2024-11-27 09:55:08.992471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:07.867 BaseBdev2 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.867 09:55:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.127 BaseBdev3_malloc 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.127 [2024-11-27 09:55:09.082659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:08.127 [2024-11-27 09:55:09.082746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.127 [2024-11-27 09:55:09.082776] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:08.127 [2024-11-27 09:55:09.082789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.127 [2024-11-27 09:55:09.085495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.127 [2024-11-27 
09:55:09.085628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:08.127 BaseBdev3 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.127 BaseBdev4_malloc 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.127 [2024-11-27 09:55:09.145791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:08.127 [2024-11-27 09:55:09.145888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.127 [2024-11-27 09:55:09.145917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:08.127 [2024-11-27 09:55:09.145929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.127 [2024-11-27 09:55:09.148575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.127 [2024-11-27 09:55:09.148714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:08.127 BaseBdev4 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.127 spare_malloc 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.127 spare_delay 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.127 [2024-11-27 09:55:09.221554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:08.127 [2024-11-27 09:55:09.221650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.127 [2024-11-27 09:55:09.221677] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:08.127 [2024-11-27 09:55:09.221689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.127 [2024-11-27 09:55:09.224415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.127 [2024-11-27 09:55:09.224464] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:08.127 spare 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.127 [2024-11-27 09:55:09.233606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.127 [2024-11-27 09:55:09.236033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:08.127 [2024-11-27 09:55:09.236114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:08.127 [2024-11-27 09:55:09.236172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:08.127 [2024-11-27 09:55:09.236281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:08.127 [2024-11-27 09:55:09.236302] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:08.127 [2024-11-27 09:55:09.236672] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:08.127 [2024-11-27 09:55:09.244267] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:08.127 [2024-11-27 09:55:09.244342] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:08.127 [2024-11-27 09:55:09.244733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.127 09:55:09 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.127 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.386 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.386 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.386 "name": "raid_bdev1", 00:18:08.386 "uuid": "7347225a-4420-4399-9bfa-fc20a7b21bb3", 00:18:08.386 "strip_size_kb": 64, 00:18:08.386 "state": "online", 00:18:08.386 "raid_level": "raid5f", 00:18:08.386 "superblock": false, 00:18:08.386 "num_base_bdevs": 4, 00:18:08.386 
"num_base_bdevs_discovered": 4, 00:18:08.386 "num_base_bdevs_operational": 4, 00:18:08.386 "base_bdevs_list": [ 00:18:08.386 { 00:18:08.386 "name": "BaseBdev1", 00:18:08.386 "uuid": "89d8aa08-db50-53ff-b9e2-e3da4eb4bbcf", 00:18:08.386 "is_configured": true, 00:18:08.386 "data_offset": 0, 00:18:08.386 "data_size": 65536 00:18:08.386 }, 00:18:08.386 { 00:18:08.386 "name": "BaseBdev2", 00:18:08.386 "uuid": "26033f4d-5a6f-5a87-beff-70bf4419d6cd", 00:18:08.386 "is_configured": true, 00:18:08.386 "data_offset": 0, 00:18:08.386 "data_size": 65536 00:18:08.386 }, 00:18:08.386 { 00:18:08.386 "name": "BaseBdev3", 00:18:08.386 "uuid": "0111010a-70fa-5fe9-a9a0-67b3bdf5aded", 00:18:08.386 "is_configured": true, 00:18:08.386 "data_offset": 0, 00:18:08.386 "data_size": 65536 00:18:08.386 }, 00:18:08.386 { 00:18:08.386 "name": "BaseBdev4", 00:18:08.386 "uuid": "66db6964-517b-55e0-ae5c-ffc34136facd", 00:18:08.386 "is_configured": true, 00:18:08.386 "data_offset": 0, 00:18:08.386 "data_size": 65536 00:18:08.386 } 00:18:08.386 ] 00:18:08.386 }' 00:18:08.386 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.386 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.645 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:08.645 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:08.645 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.645 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.645 [2024-11-27 09:55:09.694422] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:08.645 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.645 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:18:08.645 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.645 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.645 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.645 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:08.645 09:55:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.905 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:08.905 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:08.905 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:08.905 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:08.905 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:08.905 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:08.905 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:08.905 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:08.905 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:08.905 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:08.905 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:08.905 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:08.905 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:08.905 09:55:09 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:08.905 [2024-11-27 09:55:09.973741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:08.905 /dev/nbd0 00:18:08.905 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:08.905 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:08.905 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:08.905 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:08.905 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:08.905 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:08.905 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:08.905 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:08.905 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:08.905 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:08.905 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:08.905 1+0 records in 00:18:08.905 1+0 records out 00:18:08.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662069 s, 6.2 MB/s 00:18:09.165 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:09.165 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:09.165 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:18:09.165 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:09.165 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:09.165 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:09.165 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:09.165 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:09.165 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:09.165 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:09.165 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:18:09.733 512+0 records in 00:18:09.733 512+0 records out 00:18:09.733 100663296 bytes (101 MB, 96 MiB) copied, 0.512737 s, 196 MB/s 00:18:09.733 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:09.733 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:09.733 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:09.733 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:09.733 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:09.733 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:09.733 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:09.733 [2024-11-27 09:55:10.795065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.733 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:18:09.733 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.734 [2024-11-27 09:55:10.810694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.734 "name": "raid_bdev1", 00:18:09.734 "uuid": "7347225a-4420-4399-9bfa-fc20a7b21bb3", 00:18:09.734 "strip_size_kb": 64, 00:18:09.734 "state": "online", 00:18:09.734 "raid_level": "raid5f", 00:18:09.734 "superblock": false, 00:18:09.734 "num_base_bdevs": 4, 00:18:09.734 "num_base_bdevs_discovered": 3, 00:18:09.734 "num_base_bdevs_operational": 3, 00:18:09.734 "base_bdevs_list": [ 00:18:09.734 { 00:18:09.734 "name": null, 00:18:09.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.734 "is_configured": false, 00:18:09.734 "data_offset": 0, 00:18:09.734 "data_size": 65536 00:18:09.734 }, 00:18:09.734 { 00:18:09.734 "name": "BaseBdev2", 00:18:09.734 "uuid": "26033f4d-5a6f-5a87-beff-70bf4419d6cd", 00:18:09.734 "is_configured": true, 00:18:09.734 "data_offset": 0, 00:18:09.734 "data_size": 65536 00:18:09.734 }, 00:18:09.734 { 00:18:09.734 "name": "BaseBdev3", 00:18:09.734 "uuid": "0111010a-70fa-5fe9-a9a0-67b3bdf5aded", 00:18:09.734 "is_configured": true, 00:18:09.734 "data_offset": 0, 
00:18:09.734 "data_size": 65536 00:18:09.734 }, 00:18:09.734 { 00:18:09.734 "name": "BaseBdev4", 00:18:09.734 "uuid": "66db6964-517b-55e0-ae5c-ffc34136facd", 00:18:09.734 "is_configured": true, 00:18:09.734 "data_offset": 0, 00:18:09.734 "data_size": 65536 00:18:09.734 } 00:18:09.734 ] 00:18:09.734 }' 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.734 09:55:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.304 09:55:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:10.304 09:55:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.304 09:55:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.304 [2024-11-27 09:55:11.297951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.304 [2024-11-27 09:55:11.314843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:10.304 09:55:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.304 09:55:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:10.304 [2024-11-27 09:55:11.325329] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:11.238 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.238 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.238 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:11.238 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:11.238 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.238 09:55:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.238 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.238 09:55:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.238 09:55:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.238 09:55:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.495 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.495 "name": "raid_bdev1", 00:18:11.495 "uuid": "7347225a-4420-4399-9bfa-fc20a7b21bb3", 00:18:11.495 "strip_size_kb": 64, 00:18:11.495 "state": "online", 00:18:11.496 "raid_level": "raid5f", 00:18:11.496 "superblock": false, 00:18:11.496 "num_base_bdevs": 4, 00:18:11.496 "num_base_bdevs_discovered": 4, 00:18:11.496 "num_base_bdevs_operational": 4, 00:18:11.496 "process": { 00:18:11.496 "type": "rebuild", 00:18:11.496 "target": "spare", 00:18:11.496 "progress": { 00:18:11.496 "blocks": 19200, 00:18:11.496 "percent": 9 00:18:11.496 } 00:18:11.496 }, 00:18:11.496 "base_bdevs_list": [ 00:18:11.496 { 00:18:11.496 "name": "spare", 00:18:11.496 "uuid": "b114b43b-ee9a-5907-927b-1342e5734c41", 00:18:11.496 "is_configured": true, 00:18:11.496 "data_offset": 0, 00:18:11.496 "data_size": 65536 00:18:11.496 }, 00:18:11.496 { 00:18:11.496 "name": "BaseBdev2", 00:18:11.496 "uuid": "26033f4d-5a6f-5a87-beff-70bf4419d6cd", 00:18:11.496 "is_configured": true, 00:18:11.496 "data_offset": 0, 00:18:11.496 "data_size": 65536 00:18:11.496 }, 00:18:11.496 { 00:18:11.496 "name": "BaseBdev3", 00:18:11.496 "uuid": "0111010a-70fa-5fe9-a9a0-67b3bdf5aded", 00:18:11.496 "is_configured": true, 00:18:11.496 "data_offset": 0, 00:18:11.496 "data_size": 65536 00:18:11.496 }, 00:18:11.496 { 00:18:11.496 "name": "BaseBdev4", 00:18:11.496 "uuid": "66db6964-517b-55e0-ae5c-ffc34136facd", 
00:18:11.496 "is_configured": true, 00:18:11.496 "data_offset": 0, 00:18:11.496 "data_size": 65536 00:18:11.496 } 00:18:11.496 ] 00:18:11.496 }' 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.496 [2024-11-27 09:55:12.465829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.496 [2024-11-27 09:55:12.538830] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:11.496 [2024-11-27 09:55:12.538945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.496 [2024-11-27 09:55:12.538968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.496 [2024-11-27 09:55:12.538984] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.496 09:55:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.759 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.759 "name": "raid_bdev1", 00:18:11.759 "uuid": "7347225a-4420-4399-9bfa-fc20a7b21bb3", 00:18:11.759 "strip_size_kb": 64, 00:18:11.759 "state": "online", 00:18:11.759 "raid_level": "raid5f", 00:18:11.759 "superblock": false, 00:18:11.759 "num_base_bdevs": 4, 00:18:11.759 "num_base_bdevs_discovered": 3, 00:18:11.759 "num_base_bdevs_operational": 3, 00:18:11.760 "base_bdevs_list": [ 00:18:11.760 { 00:18:11.760 "name": null, 00:18:11.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.760 "is_configured": false, 00:18:11.760 "data_offset": 0, 00:18:11.760 "data_size": 65536 
00:18:11.760 }, 00:18:11.760 { 00:18:11.760 "name": "BaseBdev2", 00:18:11.760 "uuid": "26033f4d-5a6f-5a87-beff-70bf4419d6cd", 00:18:11.760 "is_configured": true, 00:18:11.760 "data_offset": 0, 00:18:11.760 "data_size": 65536 00:18:11.760 }, 00:18:11.760 { 00:18:11.760 "name": "BaseBdev3", 00:18:11.760 "uuid": "0111010a-70fa-5fe9-a9a0-67b3bdf5aded", 00:18:11.760 "is_configured": true, 00:18:11.760 "data_offset": 0, 00:18:11.760 "data_size": 65536 00:18:11.760 }, 00:18:11.760 { 00:18:11.760 "name": "BaseBdev4", 00:18:11.760 "uuid": "66db6964-517b-55e0-ae5c-ffc34136facd", 00:18:11.760 "is_configured": true, 00:18:11.760 "data_offset": 0, 00:18:11.760 "data_size": 65536 00:18:11.760 } 00:18:11.760 ] 00:18:11.760 }' 00:18:11.760 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.760 09:55:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.027 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:12.027 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.027 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:12.027 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:12.027 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.027 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.027 09:55:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.027 09:55:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.027 09:55:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.027 09:55:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:18:12.027 09:55:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.027 "name": "raid_bdev1", 00:18:12.027 "uuid": "7347225a-4420-4399-9bfa-fc20a7b21bb3", 00:18:12.027 "strip_size_kb": 64, 00:18:12.027 "state": "online", 00:18:12.027 "raid_level": "raid5f", 00:18:12.027 "superblock": false, 00:18:12.027 "num_base_bdevs": 4, 00:18:12.027 "num_base_bdevs_discovered": 3, 00:18:12.027 "num_base_bdevs_operational": 3, 00:18:12.027 "base_bdevs_list": [ 00:18:12.027 { 00:18:12.027 "name": null, 00:18:12.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.027 "is_configured": false, 00:18:12.027 "data_offset": 0, 00:18:12.027 "data_size": 65536 00:18:12.027 }, 00:18:12.027 { 00:18:12.027 "name": "BaseBdev2", 00:18:12.027 "uuid": "26033f4d-5a6f-5a87-beff-70bf4419d6cd", 00:18:12.027 "is_configured": true, 00:18:12.027 "data_offset": 0, 00:18:12.027 "data_size": 65536 00:18:12.027 }, 00:18:12.027 { 00:18:12.027 "name": "BaseBdev3", 00:18:12.027 "uuid": "0111010a-70fa-5fe9-a9a0-67b3bdf5aded", 00:18:12.027 "is_configured": true, 00:18:12.028 "data_offset": 0, 00:18:12.028 "data_size": 65536 00:18:12.028 }, 00:18:12.028 { 00:18:12.028 "name": "BaseBdev4", 00:18:12.028 "uuid": "66db6964-517b-55e0-ae5c-ffc34136facd", 00:18:12.028 "is_configured": true, 00:18:12.028 "data_offset": 0, 00:18:12.028 "data_size": 65536 00:18:12.028 } 00:18:12.028 ] 00:18:12.028 }' 00:18:12.028 09:55:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.028 09:55:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:12.028 09:55:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.028 09:55:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:12.028 09:55:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:18:12.028 09:55:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.028 09:55:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.028 [2024-11-27 09:55:13.087366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:12.028 [2024-11-27 09:55:13.103170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:18:12.028 09:55:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.028 09:55:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:12.028 [2024-11-27 09:55:13.112985] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.407 
"name": "raid_bdev1", 00:18:13.407 "uuid": "7347225a-4420-4399-9bfa-fc20a7b21bb3", 00:18:13.407 "strip_size_kb": 64, 00:18:13.407 "state": "online", 00:18:13.407 "raid_level": "raid5f", 00:18:13.407 "superblock": false, 00:18:13.407 "num_base_bdevs": 4, 00:18:13.407 "num_base_bdevs_discovered": 4, 00:18:13.407 "num_base_bdevs_operational": 4, 00:18:13.407 "process": { 00:18:13.407 "type": "rebuild", 00:18:13.407 "target": "spare", 00:18:13.407 "progress": { 00:18:13.407 "blocks": 19200, 00:18:13.407 "percent": 9 00:18:13.407 } 00:18:13.407 }, 00:18:13.407 "base_bdevs_list": [ 00:18:13.407 { 00:18:13.407 "name": "spare", 00:18:13.407 "uuid": "b114b43b-ee9a-5907-927b-1342e5734c41", 00:18:13.407 "is_configured": true, 00:18:13.407 "data_offset": 0, 00:18:13.407 "data_size": 65536 00:18:13.407 }, 00:18:13.407 { 00:18:13.407 "name": "BaseBdev2", 00:18:13.407 "uuid": "26033f4d-5a6f-5a87-beff-70bf4419d6cd", 00:18:13.407 "is_configured": true, 00:18:13.407 "data_offset": 0, 00:18:13.407 "data_size": 65536 00:18:13.407 }, 00:18:13.407 { 00:18:13.407 "name": "BaseBdev3", 00:18:13.407 "uuid": "0111010a-70fa-5fe9-a9a0-67b3bdf5aded", 00:18:13.407 "is_configured": true, 00:18:13.407 "data_offset": 0, 00:18:13.407 "data_size": 65536 00:18:13.407 }, 00:18:13.407 { 00:18:13.407 "name": "BaseBdev4", 00:18:13.407 "uuid": "66db6964-517b-55e0-ae5c-ffc34136facd", 00:18:13.407 "is_configured": true, 00:18:13.407 "data_offset": 0, 00:18:13.407 "data_size": 65536 00:18:13.407 } 00:18:13.407 ] 00:18:13.407 }' 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.407 09:55:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=629 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.407 "name": "raid_bdev1", 00:18:13.407 "uuid": "7347225a-4420-4399-9bfa-fc20a7b21bb3", 00:18:13.407 "strip_size_kb": 64, 00:18:13.407 "state": "online", 00:18:13.407 "raid_level": "raid5f", 00:18:13.407 "superblock": false, 00:18:13.407 "num_base_bdevs": 4, 00:18:13.407 
"num_base_bdevs_discovered": 4, 00:18:13.407 "num_base_bdevs_operational": 4, 00:18:13.407 "process": { 00:18:13.407 "type": "rebuild", 00:18:13.407 "target": "spare", 00:18:13.407 "progress": { 00:18:13.407 "blocks": 21120, 00:18:13.407 "percent": 10 00:18:13.407 } 00:18:13.407 }, 00:18:13.407 "base_bdevs_list": [ 00:18:13.407 { 00:18:13.407 "name": "spare", 00:18:13.407 "uuid": "b114b43b-ee9a-5907-927b-1342e5734c41", 00:18:13.407 "is_configured": true, 00:18:13.407 "data_offset": 0, 00:18:13.407 "data_size": 65536 00:18:13.407 }, 00:18:13.407 { 00:18:13.407 "name": "BaseBdev2", 00:18:13.407 "uuid": "26033f4d-5a6f-5a87-beff-70bf4419d6cd", 00:18:13.407 "is_configured": true, 00:18:13.407 "data_offset": 0, 00:18:13.407 "data_size": 65536 00:18:13.407 }, 00:18:13.407 { 00:18:13.407 "name": "BaseBdev3", 00:18:13.407 "uuid": "0111010a-70fa-5fe9-a9a0-67b3bdf5aded", 00:18:13.407 "is_configured": true, 00:18:13.407 "data_offset": 0, 00:18:13.407 "data_size": 65536 00:18:13.407 }, 00:18:13.407 { 00:18:13.407 "name": "BaseBdev4", 00:18:13.407 "uuid": "66db6964-517b-55e0-ae5c-ffc34136facd", 00:18:13.407 "is_configured": true, 00:18:13.407 "data_offset": 0, 00:18:13.407 "data_size": 65536 00:18:13.407 } 00:18:13.407 ] 00:18:13.407 }' 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:13.407 09:55:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:14.347 09:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:14.347 09:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:18:14.347 09:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.347 09:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.347 09:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.347 09:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.347 09:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.347 09:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.347 09:55:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.347 09:55:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.347 09:55:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.347 09:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.347 "name": "raid_bdev1", 00:18:14.347 "uuid": "7347225a-4420-4399-9bfa-fc20a7b21bb3", 00:18:14.347 "strip_size_kb": 64, 00:18:14.347 "state": "online", 00:18:14.347 "raid_level": "raid5f", 00:18:14.347 "superblock": false, 00:18:14.347 "num_base_bdevs": 4, 00:18:14.347 "num_base_bdevs_discovered": 4, 00:18:14.347 "num_base_bdevs_operational": 4, 00:18:14.347 "process": { 00:18:14.347 "type": "rebuild", 00:18:14.347 "target": "spare", 00:18:14.347 "progress": { 00:18:14.347 "blocks": 42240, 00:18:14.347 "percent": 21 00:18:14.347 } 00:18:14.347 }, 00:18:14.347 "base_bdevs_list": [ 00:18:14.347 { 00:18:14.347 "name": "spare", 00:18:14.347 "uuid": "b114b43b-ee9a-5907-927b-1342e5734c41", 00:18:14.347 "is_configured": true, 00:18:14.347 "data_offset": 0, 00:18:14.347 "data_size": 65536 00:18:14.347 }, 00:18:14.347 { 00:18:14.347 "name": "BaseBdev2", 00:18:14.347 "uuid": 
"26033f4d-5a6f-5a87-beff-70bf4419d6cd", 00:18:14.347 "is_configured": true, 00:18:14.347 "data_offset": 0, 00:18:14.347 "data_size": 65536 00:18:14.347 }, 00:18:14.347 { 00:18:14.347 "name": "BaseBdev3", 00:18:14.347 "uuid": "0111010a-70fa-5fe9-a9a0-67b3bdf5aded", 00:18:14.347 "is_configured": true, 00:18:14.347 "data_offset": 0, 00:18:14.347 "data_size": 65536 00:18:14.347 }, 00:18:14.347 { 00:18:14.347 "name": "BaseBdev4", 00:18:14.347 "uuid": "66db6964-517b-55e0-ae5c-ffc34136facd", 00:18:14.347 "is_configured": true, 00:18:14.347 "data_offset": 0, 00:18:14.347 "data_size": 65536 00:18:14.347 } 00:18:14.347 ] 00:18:14.347 }' 00:18:14.347 09:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.607 09:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.607 09:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.607 09:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.607 09:55:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:15.546 09:55:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:15.546 09:55:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:15.546 09:55:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.546 09:55:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:15.546 09:55:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:15.546 09:55:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.546 09:55:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.546 09:55:16 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.546 09:55:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.546 09:55:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.546 09:55:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.546 09:55:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.546 "name": "raid_bdev1", 00:18:15.546 "uuid": "7347225a-4420-4399-9bfa-fc20a7b21bb3", 00:18:15.546 "strip_size_kb": 64, 00:18:15.546 "state": "online", 00:18:15.546 "raid_level": "raid5f", 00:18:15.546 "superblock": false, 00:18:15.546 "num_base_bdevs": 4, 00:18:15.546 "num_base_bdevs_discovered": 4, 00:18:15.546 "num_base_bdevs_operational": 4, 00:18:15.546 "process": { 00:18:15.546 "type": "rebuild", 00:18:15.546 "target": "spare", 00:18:15.546 "progress": { 00:18:15.546 "blocks": 65280, 00:18:15.546 "percent": 33 00:18:15.546 } 00:18:15.546 }, 00:18:15.546 "base_bdevs_list": [ 00:18:15.546 { 00:18:15.546 "name": "spare", 00:18:15.546 "uuid": "b114b43b-ee9a-5907-927b-1342e5734c41", 00:18:15.546 "is_configured": true, 00:18:15.546 "data_offset": 0, 00:18:15.546 "data_size": 65536 00:18:15.546 }, 00:18:15.546 { 00:18:15.546 "name": "BaseBdev2", 00:18:15.546 "uuid": "26033f4d-5a6f-5a87-beff-70bf4419d6cd", 00:18:15.546 "is_configured": true, 00:18:15.546 "data_offset": 0, 00:18:15.546 "data_size": 65536 00:18:15.546 }, 00:18:15.546 { 00:18:15.546 "name": "BaseBdev3", 00:18:15.546 "uuid": "0111010a-70fa-5fe9-a9a0-67b3bdf5aded", 00:18:15.546 "is_configured": true, 00:18:15.546 "data_offset": 0, 00:18:15.546 "data_size": 65536 00:18:15.546 }, 00:18:15.546 { 00:18:15.546 "name": "BaseBdev4", 00:18:15.546 "uuid": "66db6964-517b-55e0-ae5c-ffc34136facd", 00:18:15.546 "is_configured": true, 00:18:15.546 "data_offset": 0, 00:18:15.546 "data_size": 65536 00:18:15.546 } 
00:18:15.546 ] 00:18:15.546 }' 00:18:15.546 09:55:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.546 09:55:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:15.546 09:55:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.805 09:55:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:15.805 09:55:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:16.744 09:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:16.744 09:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:16.744 09:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.744 09:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:16.744 09:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:16.744 09:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.744 09:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.745 09:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.745 09:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.745 09:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.745 09:55:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.745 09:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.745 "name": "raid_bdev1", 00:18:16.745 "uuid": "7347225a-4420-4399-9bfa-fc20a7b21bb3", 00:18:16.745 
"strip_size_kb": 64, 00:18:16.745 "state": "online", 00:18:16.745 "raid_level": "raid5f", 00:18:16.745 "superblock": false, 00:18:16.745 "num_base_bdevs": 4, 00:18:16.745 "num_base_bdevs_discovered": 4, 00:18:16.745 "num_base_bdevs_operational": 4, 00:18:16.745 "process": { 00:18:16.745 "type": "rebuild", 00:18:16.745 "target": "spare", 00:18:16.745 "progress": { 00:18:16.745 "blocks": 86400, 00:18:16.745 "percent": 43 00:18:16.745 } 00:18:16.745 }, 00:18:16.745 "base_bdevs_list": [ 00:18:16.745 { 00:18:16.745 "name": "spare", 00:18:16.745 "uuid": "b114b43b-ee9a-5907-927b-1342e5734c41", 00:18:16.745 "is_configured": true, 00:18:16.745 "data_offset": 0, 00:18:16.745 "data_size": 65536 00:18:16.745 }, 00:18:16.745 { 00:18:16.745 "name": "BaseBdev2", 00:18:16.745 "uuid": "26033f4d-5a6f-5a87-beff-70bf4419d6cd", 00:18:16.745 "is_configured": true, 00:18:16.745 "data_offset": 0, 00:18:16.745 "data_size": 65536 00:18:16.745 }, 00:18:16.745 { 00:18:16.745 "name": "BaseBdev3", 00:18:16.745 "uuid": "0111010a-70fa-5fe9-a9a0-67b3bdf5aded", 00:18:16.745 "is_configured": true, 00:18:16.745 "data_offset": 0, 00:18:16.745 "data_size": 65536 00:18:16.745 }, 00:18:16.745 { 00:18:16.745 "name": "BaseBdev4", 00:18:16.745 "uuid": "66db6964-517b-55e0-ae5c-ffc34136facd", 00:18:16.745 "is_configured": true, 00:18:16.745 "data_offset": 0, 00:18:16.745 "data_size": 65536 00:18:16.745 } 00:18:16.745 ] 00:18:16.745 }' 00:18:16.745 09:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.745 09:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:16.745 09:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.745 09:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:16.745 09:55:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:18.124 09:55:18 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:18.124 09:55:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:18.124 09:55:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.124 09:55:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:18.124 09:55:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:18.124 09:55:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.124 09:55:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.124 09:55:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.124 09:55:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.124 09:55:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.124 09:55:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.124 09:55:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.125 "name": "raid_bdev1", 00:18:18.125 "uuid": "7347225a-4420-4399-9bfa-fc20a7b21bb3", 00:18:18.125 "strip_size_kb": 64, 00:18:18.125 "state": "online", 00:18:18.125 "raid_level": "raid5f", 00:18:18.125 "superblock": false, 00:18:18.125 "num_base_bdevs": 4, 00:18:18.125 "num_base_bdevs_discovered": 4, 00:18:18.125 "num_base_bdevs_operational": 4, 00:18:18.125 "process": { 00:18:18.125 "type": "rebuild", 00:18:18.125 "target": "spare", 00:18:18.125 "progress": { 00:18:18.125 "blocks": 107520, 00:18:18.125 "percent": 54 00:18:18.125 } 00:18:18.125 }, 00:18:18.125 "base_bdevs_list": [ 00:18:18.125 { 00:18:18.125 "name": "spare", 00:18:18.125 "uuid": "b114b43b-ee9a-5907-927b-1342e5734c41", 
00:18:18.125 "is_configured": true, 00:18:18.125 "data_offset": 0, 00:18:18.125 "data_size": 65536 00:18:18.125 }, 00:18:18.125 { 00:18:18.125 "name": "BaseBdev2", 00:18:18.125 "uuid": "26033f4d-5a6f-5a87-beff-70bf4419d6cd", 00:18:18.125 "is_configured": true, 00:18:18.125 "data_offset": 0, 00:18:18.125 "data_size": 65536 00:18:18.125 }, 00:18:18.125 { 00:18:18.125 "name": "BaseBdev3", 00:18:18.125 "uuid": "0111010a-70fa-5fe9-a9a0-67b3bdf5aded", 00:18:18.125 "is_configured": true, 00:18:18.125 "data_offset": 0, 00:18:18.125 "data_size": 65536 00:18:18.125 }, 00:18:18.125 { 00:18:18.125 "name": "BaseBdev4", 00:18:18.125 "uuid": "66db6964-517b-55e0-ae5c-ffc34136facd", 00:18:18.125 "is_configured": true, 00:18:18.125 "data_offset": 0, 00:18:18.125 "data_size": 65536 00:18:18.125 } 00:18:18.125 ] 00:18:18.125 }' 00:18:18.125 09:55:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.125 09:55:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:18.125 09:55:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.125 09:55:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:18.125 09:55:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:19.063 09:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:19.063 09:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.063 09:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.063 09:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.063 09:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.063 09:55:19 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.063 09:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.063 09:55:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.063 09:55:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.063 09:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.063 09:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.063 09:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.063 "name": "raid_bdev1", 00:18:19.063 "uuid": "7347225a-4420-4399-9bfa-fc20a7b21bb3", 00:18:19.063 "strip_size_kb": 64, 00:18:19.063 "state": "online", 00:18:19.063 "raid_level": "raid5f", 00:18:19.063 "superblock": false, 00:18:19.063 "num_base_bdevs": 4, 00:18:19.063 "num_base_bdevs_discovered": 4, 00:18:19.063 "num_base_bdevs_operational": 4, 00:18:19.063 "process": { 00:18:19.063 "type": "rebuild", 00:18:19.063 "target": "spare", 00:18:19.063 "progress": { 00:18:19.063 "blocks": 130560, 00:18:19.063 "percent": 66 00:18:19.063 } 00:18:19.063 }, 00:18:19.063 "base_bdevs_list": [ 00:18:19.063 { 00:18:19.063 "name": "spare", 00:18:19.064 "uuid": "b114b43b-ee9a-5907-927b-1342e5734c41", 00:18:19.064 "is_configured": true, 00:18:19.064 "data_offset": 0, 00:18:19.064 "data_size": 65536 00:18:19.064 }, 00:18:19.064 { 00:18:19.064 "name": "BaseBdev2", 00:18:19.064 "uuid": "26033f4d-5a6f-5a87-beff-70bf4419d6cd", 00:18:19.064 "is_configured": true, 00:18:19.064 "data_offset": 0, 00:18:19.064 "data_size": 65536 00:18:19.064 }, 00:18:19.064 { 00:18:19.064 "name": "BaseBdev3", 00:18:19.064 "uuid": "0111010a-70fa-5fe9-a9a0-67b3bdf5aded", 00:18:19.064 "is_configured": true, 00:18:19.064 "data_offset": 0, 00:18:19.064 "data_size": 65536 00:18:19.064 }, 00:18:19.064 { 00:18:19.064 "name": 
"BaseBdev4", 00:18:19.064 "uuid": "66db6964-517b-55e0-ae5c-ffc34136facd", 00:18:19.064 "is_configured": true, 00:18:19.064 "data_offset": 0, 00:18:19.064 "data_size": 65536 00:18:19.064 } 00:18:19.064 ] 00:18:19.064 }' 00:18:19.064 09:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.064 09:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.064 09:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.064 09:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.064 09:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:20.443 09:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:20.443 09:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.443 09:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.443 09:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:20.443 09:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:20.443 09:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.443 09:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.443 09:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.444 09:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.444 09:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.444 09:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.444 09:55:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.444 "name": "raid_bdev1", 00:18:20.444 "uuid": "7347225a-4420-4399-9bfa-fc20a7b21bb3", 00:18:20.444 "strip_size_kb": 64, 00:18:20.444 "state": "online", 00:18:20.444 "raid_level": "raid5f", 00:18:20.444 "superblock": false, 00:18:20.444 "num_base_bdevs": 4, 00:18:20.444 "num_base_bdevs_discovered": 4, 00:18:20.444 "num_base_bdevs_operational": 4, 00:18:20.444 "process": { 00:18:20.444 "type": "rebuild", 00:18:20.444 "target": "spare", 00:18:20.444 "progress": { 00:18:20.444 "blocks": 151680, 00:18:20.444 "percent": 77 00:18:20.444 } 00:18:20.444 }, 00:18:20.444 "base_bdevs_list": [ 00:18:20.444 { 00:18:20.444 "name": "spare", 00:18:20.444 "uuid": "b114b43b-ee9a-5907-927b-1342e5734c41", 00:18:20.444 "is_configured": true, 00:18:20.444 "data_offset": 0, 00:18:20.444 "data_size": 65536 00:18:20.444 }, 00:18:20.444 { 00:18:20.444 "name": "BaseBdev2", 00:18:20.444 "uuid": "26033f4d-5a6f-5a87-beff-70bf4419d6cd", 00:18:20.444 "is_configured": true, 00:18:20.444 "data_offset": 0, 00:18:20.444 "data_size": 65536 00:18:20.444 }, 00:18:20.444 { 00:18:20.444 "name": "BaseBdev3", 00:18:20.444 "uuid": "0111010a-70fa-5fe9-a9a0-67b3bdf5aded", 00:18:20.444 "is_configured": true, 00:18:20.444 "data_offset": 0, 00:18:20.444 "data_size": 65536 00:18:20.444 }, 00:18:20.444 { 00:18:20.444 "name": "BaseBdev4", 00:18:20.444 "uuid": "66db6964-517b-55e0-ae5c-ffc34136facd", 00:18:20.444 "is_configured": true, 00:18:20.444 "data_offset": 0, 00:18:20.444 "data_size": 65536 00:18:20.444 } 00:18:20.444 ] 00:18:20.444 }' 00:18:20.444 09:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.444 09:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.444 09:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.444 09:55:21 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.444 09:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:21.382 09:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:21.382 09:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.382 09:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.382 09:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.382 09:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.382 09:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.382 09:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.382 09:55:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.382 09:55:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.382 09:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.382 09:55:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.382 09:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.382 "name": "raid_bdev1", 00:18:21.382 "uuid": "7347225a-4420-4399-9bfa-fc20a7b21bb3", 00:18:21.382 "strip_size_kb": 64, 00:18:21.382 "state": "online", 00:18:21.382 "raid_level": "raid5f", 00:18:21.382 "superblock": false, 00:18:21.382 "num_base_bdevs": 4, 00:18:21.382 "num_base_bdevs_discovered": 4, 00:18:21.382 "num_base_bdevs_operational": 4, 00:18:21.382 "process": { 00:18:21.382 "type": "rebuild", 00:18:21.382 "target": "spare", 00:18:21.382 "progress": { 00:18:21.382 "blocks": 172800, 00:18:21.382 "percent": 87 
00:18:21.382 } 00:18:21.382 }, 00:18:21.382 "base_bdevs_list": [ 00:18:21.382 { 00:18:21.382 "name": "spare", 00:18:21.382 "uuid": "b114b43b-ee9a-5907-927b-1342e5734c41", 00:18:21.382 "is_configured": true, 00:18:21.382 "data_offset": 0, 00:18:21.382 "data_size": 65536 00:18:21.382 }, 00:18:21.382 { 00:18:21.382 "name": "BaseBdev2", 00:18:21.382 "uuid": "26033f4d-5a6f-5a87-beff-70bf4419d6cd", 00:18:21.382 "is_configured": true, 00:18:21.382 "data_offset": 0, 00:18:21.382 "data_size": 65536 00:18:21.382 }, 00:18:21.382 { 00:18:21.382 "name": "BaseBdev3", 00:18:21.382 "uuid": "0111010a-70fa-5fe9-a9a0-67b3bdf5aded", 00:18:21.382 "is_configured": true, 00:18:21.382 "data_offset": 0, 00:18:21.382 "data_size": 65536 00:18:21.382 }, 00:18:21.382 { 00:18:21.382 "name": "BaseBdev4", 00:18:21.382 "uuid": "66db6964-517b-55e0-ae5c-ffc34136facd", 00:18:21.382 "is_configured": true, 00:18:21.382 "data_offset": 0, 00:18:21.382 "data_size": 65536 00:18:21.382 } 00:18:21.382 ] 00:18:21.382 }' 00:18:21.382 09:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.382 09:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.382 09:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.382 09:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.382 09:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:22.321 09:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:22.321 09:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.321 09:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.321 09:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:18:22.321 09:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.321 09:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.321 09:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.321 09:55:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.321 09:55:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:22.321 09:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.581 09:55:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.581 09:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.581 "name": "raid_bdev1", 00:18:22.581 "uuid": "7347225a-4420-4399-9bfa-fc20a7b21bb3", 00:18:22.581 "strip_size_kb": 64, 00:18:22.581 "state": "online", 00:18:22.581 "raid_level": "raid5f", 00:18:22.581 "superblock": false, 00:18:22.581 "num_base_bdevs": 4, 00:18:22.581 "num_base_bdevs_discovered": 4, 00:18:22.581 "num_base_bdevs_operational": 4, 00:18:22.581 "process": { 00:18:22.581 "type": "rebuild", 00:18:22.581 "target": "spare", 00:18:22.581 "progress": { 00:18:22.581 "blocks": 195840, 00:18:22.581 "percent": 99 00:18:22.581 } 00:18:22.581 }, 00:18:22.581 "base_bdevs_list": [ 00:18:22.581 { 00:18:22.581 "name": "spare", 00:18:22.581 "uuid": "b114b43b-ee9a-5907-927b-1342e5734c41", 00:18:22.581 "is_configured": true, 00:18:22.581 "data_offset": 0, 00:18:22.581 "data_size": 65536 00:18:22.581 }, 00:18:22.581 { 00:18:22.581 "name": "BaseBdev2", 00:18:22.581 "uuid": "26033f4d-5a6f-5a87-beff-70bf4419d6cd", 00:18:22.581 "is_configured": true, 00:18:22.581 "data_offset": 0, 00:18:22.581 "data_size": 65536 00:18:22.581 }, 00:18:22.581 { 00:18:22.581 "name": "BaseBdev3", 00:18:22.581 "uuid": 
"0111010a-70fa-5fe9-a9a0-67b3bdf5aded", 00:18:22.581 "is_configured": true, 00:18:22.581 "data_offset": 0, 00:18:22.581 "data_size": 65536 00:18:22.581 }, 00:18:22.581 { 00:18:22.581 "name": "BaseBdev4", 00:18:22.581 "uuid": "66db6964-517b-55e0-ae5c-ffc34136facd", 00:18:22.581 "is_configured": true, 00:18:22.581 "data_offset": 0, 00:18:22.581 "data_size": 65536 00:18:22.581 } 00:18:22.581 ] 00:18:22.581 }' 00:18:22.581 09:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.581 [2024-11-27 09:55:23.521991] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:22.581 [2024-11-27 09:55:23.522141] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:22.581 [2024-11-27 09:55:23.522232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.581 09:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.581 09:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:22.581 09:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.581 09:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:23.563 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:23.563 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.563 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.563 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.563 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.563 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:18:23.563 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.563 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.563 09:55:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.563 09:55:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.563 09:55:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.563 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.563 "name": "raid_bdev1", 00:18:23.563 "uuid": "7347225a-4420-4399-9bfa-fc20a7b21bb3", 00:18:23.563 "strip_size_kb": 64, 00:18:23.563 "state": "online", 00:18:23.563 "raid_level": "raid5f", 00:18:23.563 "superblock": false, 00:18:23.563 "num_base_bdevs": 4, 00:18:23.563 "num_base_bdevs_discovered": 4, 00:18:23.563 "num_base_bdevs_operational": 4, 00:18:23.563 "base_bdevs_list": [ 00:18:23.563 { 00:18:23.563 "name": "spare", 00:18:23.563 "uuid": "b114b43b-ee9a-5907-927b-1342e5734c41", 00:18:23.563 "is_configured": true, 00:18:23.563 "data_offset": 0, 00:18:23.563 "data_size": 65536 00:18:23.563 }, 00:18:23.563 { 00:18:23.563 "name": "BaseBdev2", 00:18:23.563 "uuid": "26033f4d-5a6f-5a87-beff-70bf4419d6cd", 00:18:23.563 "is_configured": true, 00:18:23.563 "data_offset": 0, 00:18:23.563 "data_size": 65536 00:18:23.563 }, 00:18:23.563 { 00:18:23.563 "name": "BaseBdev3", 00:18:23.563 "uuid": "0111010a-70fa-5fe9-a9a0-67b3bdf5aded", 00:18:23.563 "is_configured": true, 00:18:23.563 "data_offset": 0, 00:18:23.563 "data_size": 65536 00:18:23.563 }, 00:18:23.563 { 00:18:23.563 "name": "BaseBdev4", 00:18:23.563 "uuid": "66db6964-517b-55e0-ae5c-ffc34136facd", 00:18:23.563 "is_configured": true, 00:18:23.563 "data_offset": 0, 00:18:23.563 "data_size": 65536 00:18:23.563 } 00:18:23.563 ] 00:18:23.563 }' 00:18:23.563 09:55:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.823 "name": "raid_bdev1", 00:18:23.823 "uuid": "7347225a-4420-4399-9bfa-fc20a7b21bb3", 00:18:23.823 "strip_size_kb": 64, 00:18:23.823 "state": "online", 00:18:23.823 "raid_level": "raid5f", 00:18:23.823 "superblock": false, 00:18:23.823 "num_base_bdevs": 4, 00:18:23.823 
"num_base_bdevs_discovered": 4, 00:18:23.823 "num_base_bdevs_operational": 4, 00:18:23.823 "base_bdevs_list": [ 00:18:23.823 { 00:18:23.823 "name": "spare", 00:18:23.823 "uuid": "b114b43b-ee9a-5907-927b-1342e5734c41", 00:18:23.823 "is_configured": true, 00:18:23.823 "data_offset": 0, 00:18:23.823 "data_size": 65536 00:18:23.823 }, 00:18:23.823 { 00:18:23.823 "name": "BaseBdev2", 00:18:23.823 "uuid": "26033f4d-5a6f-5a87-beff-70bf4419d6cd", 00:18:23.823 "is_configured": true, 00:18:23.823 "data_offset": 0, 00:18:23.823 "data_size": 65536 00:18:23.823 }, 00:18:23.823 { 00:18:23.823 "name": "BaseBdev3", 00:18:23.823 "uuid": "0111010a-70fa-5fe9-a9a0-67b3bdf5aded", 00:18:23.823 "is_configured": true, 00:18:23.823 "data_offset": 0, 00:18:23.823 "data_size": 65536 00:18:23.823 }, 00:18:23.823 { 00:18:23.823 "name": "BaseBdev4", 00:18:23.823 "uuid": "66db6964-517b-55e0-ae5c-ffc34136facd", 00:18:23.823 "is_configured": true, 00:18:23.823 "data_offset": 0, 00:18:23.823 "data_size": 65536 00:18:23.823 } 00:18:23.823 ] 00:18:23.823 }' 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.823 "name": "raid_bdev1", 00:18:23.823 "uuid": "7347225a-4420-4399-9bfa-fc20a7b21bb3", 00:18:23.823 "strip_size_kb": 64, 00:18:23.823 "state": "online", 00:18:23.823 "raid_level": "raid5f", 00:18:23.823 "superblock": false, 00:18:23.823 "num_base_bdevs": 4, 00:18:23.823 "num_base_bdevs_discovered": 4, 00:18:23.823 "num_base_bdevs_operational": 4, 00:18:23.823 "base_bdevs_list": [ 00:18:23.823 { 00:18:23.823 "name": "spare", 00:18:23.823 "uuid": "b114b43b-ee9a-5907-927b-1342e5734c41", 00:18:23.823 "is_configured": true, 00:18:23.823 "data_offset": 0, 00:18:23.823 "data_size": 65536 00:18:23.823 }, 00:18:23.823 { 00:18:23.823 "name": "BaseBdev2", 00:18:23.823 "uuid": "26033f4d-5a6f-5a87-beff-70bf4419d6cd", 00:18:23.823 "is_configured": true, 00:18:23.823 
"data_offset": 0, 00:18:23.823 "data_size": 65536 00:18:23.823 }, 00:18:23.823 { 00:18:23.823 "name": "BaseBdev3", 00:18:23.823 "uuid": "0111010a-70fa-5fe9-a9a0-67b3bdf5aded", 00:18:23.823 "is_configured": true, 00:18:23.823 "data_offset": 0, 00:18:23.823 "data_size": 65536 00:18:23.823 }, 00:18:23.823 { 00:18:23.823 "name": "BaseBdev4", 00:18:23.823 "uuid": "66db6964-517b-55e0-ae5c-ffc34136facd", 00:18:23.823 "is_configured": true, 00:18:23.823 "data_offset": 0, 00:18:23.823 "data_size": 65536 00:18:23.823 } 00:18:23.823 ] 00:18:23.823 }' 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.823 09:55:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.391 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:24.391 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.391 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.391 [2024-11-27 09:55:25.267992] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.392 [2024-11-27 09:55:25.268063] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:24.392 [2024-11-27 09:55:25.268214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.392 [2024-11-27 09:55:25.268348] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.392 [2024-11-27 09:55:25.268364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:24.392 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.392 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:18:24.392 09:55:25 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.392 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.392 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:24.392 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.392 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:24.392 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:24.392 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:24.392 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:24.392 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:24.392 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:24.392 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:24.392 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:24.392 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:24.392 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:24.392 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:24.392 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:24.392 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:24.651 /dev/nbd0 00:18:24.651 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:24.651 09:55:25 bdev_raid.raid5f_rebuild_test 
-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:24.651 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:24.651 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:24.651 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:24.651 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:24.651 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:24.651 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:24.651 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:24.651 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:24.651 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:24.651 1+0 records in 00:18:24.651 1+0 records out 00:18:24.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391547 s, 10.5 MB/s 00:18:24.651 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.651 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:24.651 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.651 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:24.651 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:24.651 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:24.651 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:18:24.651 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:24.910 /dev/nbd1 00:18:24.910 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:24.910 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:24.910 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:24.910 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:24.910 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:24.910 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:24.910 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:24.910 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:24.910 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:24.910 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:24.910 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:24.910 1+0 records in 00:18:24.910 1+0 records out 00:18:24.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520929 s, 7.9 MB/s 00:18:24.910 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.910 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:24.910 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:24.910 09:55:25 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:24.910 09:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:24.910 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:24.910 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:24.910 09:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:25.169 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:25.169 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:25.169 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:25.169 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:25.169 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:25.169 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.169 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:25.430 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:25.430 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:25.430 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:25.430 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.430 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.430 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:25.430 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:18:25.430 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.430 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:25.430 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:25.690 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:25.690 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:25.690 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:25.690 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:25.690 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:25.690 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:25.690 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:25.690 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:25.690 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:25.690 09:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84919 00:18:25.690 09:55:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84919 ']' 00:18:25.690 09:55:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84919 00:18:25.690 09:55:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:18:25.690 09:55:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.690 09:55:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84919 00:18:25.690 killing process with pid 84919 00:18:25.690 Received shutdown signal, test time 
was about 60.000000 seconds 00:18:25.690 00:18:25.690 Latency(us) 00:18:25.690 [2024-11-27T09:55:26.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.690 [2024-11-27T09:55:26.823Z] =================================================================================================================== 00:18:25.690 [2024-11-27T09:55:26.823Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:25.690 09:55:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:25.690 09:55:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:25.690 09:55:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84919' 00:18:25.690 09:55:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84919 00:18:25.690 [2024-11-27 09:55:26.653662] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:25.690 09:55:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84919 00:18:26.259 [2024-11-27 09:55:27.192877] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:27.689 09:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:18:27.689 00:18:27.689 real 0m20.537s 00:18:27.689 user 0m24.313s 00:18:27.689 sys 0m2.489s 00:18:27.689 09:55:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:27.689 ************************************ 00:18:27.689 END TEST raid5f_rebuild_test 00:18:27.689 ************************************ 00:18:27.689 09:55:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.689 09:55:28 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:18:27.689 09:55:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:27.689 09:55:28 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:18:27.689 09:55:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.689 ************************************ 00:18:27.689 START TEST raid5f_rebuild_test_sb 00:18:27.689 ************************************ 00:18:27.689 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:18:27.689 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:27.689 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:27.689 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:27.689 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:27.689 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:27.689 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:27.689 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:27.689 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:27.689 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:27.689 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:27.689 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:27.690 09:55:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85439 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # 
waitforlisten 85439 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85439 ']' 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.690 09:55:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.690 [2024-11-27 09:55:28.625796] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:18:27.690 [2024-11-27 09:55:28.626045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:27.690 Zero copy mechanism will not be used. 
00:18:27.690 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85439 ] 00:18:27.690 [2024-11-27 09:55:28.809845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.949 [2024-11-27 09:55:28.953580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.215 [2024-11-27 09:55:29.200519] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.215 [2024-11-27 09:55:29.200712] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.473 BaseBdev1_malloc 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.473 [2024-11-27 09:55:29.532955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:28.473 [2024-11-27 09:55:29.533064] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:18:28.473 [2024-11-27 09:55:29.533094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:28.473 [2024-11-27 09:55:29.533107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.473 [2024-11-27 09:55:29.535704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.473 [2024-11-27 09:55:29.535839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:28.473 BaseBdev1 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.473 BaseBdev2_malloc 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.473 [2024-11-27 09:55:29.595451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:28.473 [2024-11-27 09:55:29.595547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.473 [2024-11-27 09:55:29.595577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:28.473 
[2024-11-27 09:55:29.595589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.473 [2024-11-27 09:55:29.598241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.473 [2024-11-27 09:55:29.598367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:28.473 BaseBdev2 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.473 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.733 BaseBdev3_malloc 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.733 [2024-11-27 09:55:29.672985] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:28.733 [2024-11-27 09:55:29.673084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.733 [2024-11-27 09:55:29.673115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:28.733 [2024-11-27 09:55:29.673127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.733 [2024-11-27 09:55:29.675826] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.733 [2024-11-27 09:55:29.675874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:28.733 BaseBdev3 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.733 BaseBdev4_malloc 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.733 [2024-11-27 09:55:29.736499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:28.733 [2024-11-27 09:55:29.736599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.733 [2024-11-27 09:55:29.736626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:28.733 [2024-11-27 09:55:29.736638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.733 [2024-11-27 09:55:29.739231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.733 [2024-11-27 09:55:29.739277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev4 00:18:28.733 BaseBdev4 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.733 spare_malloc 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.733 spare_delay 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.733 [2024-11-27 09:55:29.813750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:28.733 [2024-11-27 09:55:29.813833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.733 [2024-11-27 09:55:29.813860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:28.733 [2024-11-27 09:55:29.813871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.733 [2024-11-27 09:55:29.816561] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.733 [2024-11-27 09:55:29.816607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:28.733 spare 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.733 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.733 [2024-11-27 09:55:29.825791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.733 [2024-11-27 09:55:29.828069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:28.734 [2024-11-27 09:55:29.828141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:28.734 [2024-11-27 09:55:29.828196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:28.734 [2024-11-27 09:55:29.828423] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:28.734 [2024-11-27 09:55:29.828440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:28.734 [2024-11-27 09:55:29.828782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:28.734 [2024-11-27 09:55:29.836481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:28.734 [2024-11-27 09:55:29.836509] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:28.734 [2024-11-27 09:55:29.836800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:28.734 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.734 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:28.734 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.734 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.734 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:28.734 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:28.734 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:28.734 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.734 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.734 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.734 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.734 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.734 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.734 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.734 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.994 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.994 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.994 "name": "raid_bdev1", 00:18:28.994 "uuid": 
"cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:28.994 "strip_size_kb": 64, 00:18:28.994 "state": "online", 00:18:28.994 "raid_level": "raid5f", 00:18:28.994 "superblock": true, 00:18:28.994 "num_base_bdevs": 4, 00:18:28.994 "num_base_bdevs_discovered": 4, 00:18:28.994 "num_base_bdevs_operational": 4, 00:18:28.994 "base_bdevs_list": [ 00:18:28.994 { 00:18:28.994 "name": "BaseBdev1", 00:18:28.994 "uuid": "b20309e8-be3b-5523-bee7-3523a1a740e7", 00:18:28.994 "is_configured": true, 00:18:28.994 "data_offset": 2048, 00:18:28.994 "data_size": 63488 00:18:28.994 }, 00:18:28.994 { 00:18:28.994 "name": "BaseBdev2", 00:18:28.994 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:28.994 "is_configured": true, 00:18:28.994 "data_offset": 2048, 00:18:28.994 "data_size": 63488 00:18:28.994 }, 00:18:28.994 { 00:18:28.994 "name": "BaseBdev3", 00:18:28.994 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:28.994 "is_configured": true, 00:18:28.994 "data_offset": 2048, 00:18:28.994 "data_size": 63488 00:18:28.994 }, 00:18:28.994 { 00:18:28.994 "name": "BaseBdev4", 00:18:28.994 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:28.994 "is_configured": true, 00:18:28.994 "data_offset": 2048, 00:18:28.994 "data_size": 63488 00:18:28.994 } 00:18:28.994 ] 00:18:28.994 }' 00:18:28.994 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.994 09:55:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.253 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:29.253 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:29.253 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.253 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.253 [2024-11-27 09:55:30.234350] bdev_raid.c:1133:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:18:29.253 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.253 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:18:29.253 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.253 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:29.253 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.253 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.253 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.253 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:29.254 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:29.254 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:29.254 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:29.254 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:29.254 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:29.254 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:29.254 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:29.254 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:29.254 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:29.254 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:18:29.254 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:29.254 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:29.254 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:29.513 [2024-11-27 09:55:30.525648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:29.513 /dev/nbd0 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:29.513 1+0 records in 00:18:29.513 1+0 records out 00:18:29.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043255 s, 9.5 MB/s 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:29.513 09:55:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:18:30.083 496+0 records in 00:18:30.083 496+0 records out 00:18:30.083 97517568 bytes (98 MB, 93 MiB) copied, 0.491983 s, 198 MB/s 00:18:30.083 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:30.083 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:30.083 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:30.083 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:30.083 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:30.083 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:18:30.083 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:30.342 [2024-11-27 09:55:31.311945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:30.342 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:30.342 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:30.342 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:30.342 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:30.342 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:30.342 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:30.342 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:30.342 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.343 [2024-11-27 09:55:31.352587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.343 "name": "raid_bdev1", 00:18:30.343 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:30.343 "strip_size_kb": 64, 00:18:30.343 "state": "online", 00:18:30.343 "raid_level": "raid5f", 00:18:30.343 "superblock": true, 00:18:30.343 "num_base_bdevs": 4, 00:18:30.343 "num_base_bdevs_discovered": 3, 00:18:30.343 "num_base_bdevs_operational": 3, 00:18:30.343 "base_bdevs_list": [ 00:18:30.343 { 00:18:30.343 "name": null, 00:18:30.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.343 "is_configured": 
false, 00:18:30.343 "data_offset": 0, 00:18:30.343 "data_size": 63488 00:18:30.343 }, 00:18:30.343 { 00:18:30.343 "name": "BaseBdev2", 00:18:30.343 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:30.343 "is_configured": true, 00:18:30.343 "data_offset": 2048, 00:18:30.343 "data_size": 63488 00:18:30.343 }, 00:18:30.343 { 00:18:30.343 "name": "BaseBdev3", 00:18:30.343 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:30.343 "is_configured": true, 00:18:30.343 "data_offset": 2048, 00:18:30.343 "data_size": 63488 00:18:30.343 }, 00:18:30.343 { 00:18:30.343 "name": "BaseBdev4", 00:18:30.343 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:30.343 "is_configured": true, 00:18:30.343 "data_offset": 2048, 00:18:30.343 "data_size": 63488 00:18:30.343 } 00:18:30.343 ] 00:18:30.343 }' 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.343 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.910 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:30.910 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.910 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.910 [2024-11-27 09:55:31.843724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:30.910 [2024-11-27 09:55:31.860647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:18:30.911 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.911 09:55:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:30.911 [2024-11-27 09:55:31.871616] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:31.850 09:55:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.850 09:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:31.850 09:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:31.850 09:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:31.850 09:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:31.850 09:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.850 09:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.850 09:55:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.850 09:55:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.850 09:55:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.850 09:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:31.850 "name": "raid_bdev1", 00:18:31.850 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:31.850 "strip_size_kb": 64, 00:18:31.850 "state": "online", 00:18:31.850 "raid_level": "raid5f", 00:18:31.850 "superblock": true, 00:18:31.850 "num_base_bdevs": 4, 00:18:31.850 "num_base_bdevs_discovered": 4, 00:18:31.850 "num_base_bdevs_operational": 4, 00:18:31.850 "process": { 00:18:31.850 "type": "rebuild", 00:18:31.850 "target": "spare", 00:18:31.850 "progress": { 00:18:31.850 "blocks": 19200, 00:18:31.850 "percent": 10 00:18:31.850 } 00:18:31.850 }, 00:18:31.850 "base_bdevs_list": [ 00:18:31.850 { 00:18:31.850 "name": "spare", 00:18:31.850 "uuid": "9d41be3f-a0d9-549c-a15d-db5546b288cb", 00:18:31.850 "is_configured": true, 00:18:31.850 "data_offset": 2048, 00:18:31.850 "data_size": 63488 00:18:31.850 }, 
00:18:31.850 { 00:18:31.850 "name": "BaseBdev2", 00:18:31.850 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:31.850 "is_configured": true, 00:18:31.850 "data_offset": 2048, 00:18:31.850 "data_size": 63488 00:18:31.850 }, 00:18:31.850 { 00:18:31.850 "name": "BaseBdev3", 00:18:31.850 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:31.850 "is_configured": true, 00:18:31.850 "data_offset": 2048, 00:18:31.850 "data_size": 63488 00:18:31.850 }, 00:18:31.850 { 00:18:31.850 "name": "BaseBdev4", 00:18:31.850 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:31.850 "is_configured": true, 00:18:31.850 "data_offset": 2048, 00:18:31.850 "data_size": 63488 00:18:31.850 } 00:18:31.850 ] 00:18:31.850 }' 00:18:31.850 09:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:31.850 09:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.850 09:55:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.110 [2024-11-27 09:55:33.011862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:32.110 [2024-11-27 09:55:33.085411] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:32.110 [2024-11-27 09:55:33.085675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.110 [2024-11-27 09:55:33.085722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:32.110 
[2024-11-27 09:55:33.085749] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.110 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.110 "name": "raid_bdev1", 00:18:32.110 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:32.110 "strip_size_kb": 64, 00:18:32.110 "state": "online", 00:18:32.110 "raid_level": "raid5f", 00:18:32.110 "superblock": true, 00:18:32.110 "num_base_bdevs": 4, 00:18:32.110 "num_base_bdevs_discovered": 3, 00:18:32.110 "num_base_bdevs_operational": 3, 00:18:32.110 "base_bdevs_list": [ 00:18:32.110 { 00:18:32.110 "name": null, 00:18:32.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.110 "is_configured": false, 00:18:32.110 "data_offset": 0, 00:18:32.110 "data_size": 63488 00:18:32.110 }, 00:18:32.110 { 00:18:32.110 "name": "BaseBdev2", 00:18:32.110 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:32.110 "is_configured": true, 00:18:32.110 "data_offset": 2048, 00:18:32.110 "data_size": 63488 00:18:32.110 }, 00:18:32.110 { 00:18:32.110 "name": "BaseBdev3", 00:18:32.110 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:32.110 "is_configured": true, 00:18:32.110 "data_offset": 2048, 00:18:32.110 "data_size": 63488 00:18:32.110 }, 00:18:32.110 { 00:18:32.110 "name": "BaseBdev4", 00:18:32.110 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:32.110 "is_configured": true, 00:18:32.110 "data_offset": 2048, 00:18:32.110 "data_size": 63488 00:18:32.110 } 00:18:32.110 ] 00:18:32.111 }' 00:18:32.111 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.111 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.679 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:32.679 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.679 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:32.679 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:18:32.679 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.679 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.679 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.679 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.679 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.679 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.679 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.679 "name": "raid_bdev1", 00:18:32.679 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:32.679 "strip_size_kb": 64, 00:18:32.679 "state": "online", 00:18:32.679 "raid_level": "raid5f", 00:18:32.679 "superblock": true, 00:18:32.679 "num_base_bdevs": 4, 00:18:32.679 "num_base_bdevs_discovered": 3, 00:18:32.679 "num_base_bdevs_operational": 3, 00:18:32.679 "base_bdevs_list": [ 00:18:32.679 { 00:18:32.679 "name": null, 00:18:32.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.679 "is_configured": false, 00:18:32.679 "data_offset": 0, 00:18:32.679 "data_size": 63488 00:18:32.679 }, 00:18:32.679 { 00:18:32.679 "name": "BaseBdev2", 00:18:32.680 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:32.680 "is_configured": true, 00:18:32.680 "data_offset": 2048, 00:18:32.680 "data_size": 63488 00:18:32.680 }, 00:18:32.680 { 00:18:32.680 "name": "BaseBdev3", 00:18:32.680 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:32.680 "is_configured": true, 00:18:32.680 "data_offset": 2048, 00:18:32.680 "data_size": 63488 00:18:32.680 }, 00:18:32.680 { 00:18:32.680 "name": "BaseBdev4", 00:18:32.680 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 
00:18:32.680 "is_configured": true, 00:18:32.680 "data_offset": 2048, 00:18:32.680 "data_size": 63488 00:18:32.680 } 00:18:32.680 ] 00:18:32.680 }' 00:18:32.680 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:32.680 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:32.680 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.680 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:32.680 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:32.680 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.680 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.680 [2024-11-27 09:55:33.728768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:32.680 [2024-11-27 09:55:33.744981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:18:32.680 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.680 09:55:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:32.680 [2024-11-27 09:55:33.755425] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:34.057 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:34.057 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.057 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:34.057 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:18:34.057 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.057 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.057 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.057 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.057 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.057 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.057 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.057 "name": "raid_bdev1", 00:18:34.057 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:34.057 "strip_size_kb": 64, 00:18:34.057 "state": "online", 00:18:34.057 "raid_level": "raid5f", 00:18:34.057 "superblock": true, 00:18:34.057 "num_base_bdevs": 4, 00:18:34.057 "num_base_bdevs_discovered": 4, 00:18:34.057 "num_base_bdevs_operational": 4, 00:18:34.057 "process": { 00:18:34.057 "type": "rebuild", 00:18:34.057 "target": "spare", 00:18:34.057 "progress": { 00:18:34.057 "blocks": 19200, 00:18:34.057 "percent": 10 00:18:34.057 } 00:18:34.057 }, 00:18:34.057 "base_bdevs_list": [ 00:18:34.057 { 00:18:34.057 "name": "spare", 00:18:34.057 "uuid": "9d41be3f-a0d9-549c-a15d-db5546b288cb", 00:18:34.057 "is_configured": true, 00:18:34.057 "data_offset": 2048, 00:18:34.057 "data_size": 63488 00:18:34.057 }, 00:18:34.057 { 00:18:34.057 "name": "BaseBdev2", 00:18:34.057 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:34.057 "is_configured": true, 00:18:34.057 "data_offset": 2048, 00:18:34.057 "data_size": 63488 00:18:34.057 }, 00:18:34.057 { 00:18:34.057 "name": "BaseBdev3", 00:18:34.057 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:34.057 "is_configured": true, 00:18:34.057 "data_offset": 2048, 
00:18:34.057 "data_size": 63488 00:18:34.057 }, 00:18:34.057 { 00:18:34.057 "name": "BaseBdev4", 00:18:34.057 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:34.057 "is_configured": true, 00:18:34.057 "data_offset": 2048, 00:18:34.057 "data_size": 63488 00:18:34.057 } 00:18:34.057 ] 00:18:34.057 }' 00:18:34.057 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.057 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:34.057 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.057 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:34.058 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:34.058 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:34.058 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:34.058 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:34.058 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:34.058 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=649 00:18:34.058 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:34.058 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:34.058 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.058 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:34.058 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:18:34.058 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.058 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.058 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.058 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.058 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.058 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.058 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.058 "name": "raid_bdev1", 00:18:34.058 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:34.058 "strip_size_kb": 64, 00:18:34.058 "state": "online", 00:18:34.058 "raid_level": "raid5f", 00:18:34.058 "superblock": true, 00:18:34.058 "num_base_bdevs": 4, 00:18:34.058 "num_base_bdevs_discovered": 4, 00:18:34.058 "num_base_bdevs_operational": 4, 00:18:34.058 "process": { 00:18:34.058 "type": "rebuild", 00:18:34.058 "target": "spare", 00:18:34.058 "progress": { 00:18:34.058 "blocks": 21120, 00:18:34.058 "percent": 11 00:18:34.058 } 00:18:34.058 }, 00:18:34.058 "base_bdevs_list": [ 00:18:34.058 { 00:18:34.058 "name": "spare", 00:18:34.058 "uuid": "9d41be3f-a0d9-549c-a15d-db5546b288cb", 00:18:34.058 "is_configured": true, 00:18:34.058 "data_offset": 2048, 00:18:34.058 "data_size": 63488 00:18:34.058 }, 00:18:34.058 { 00:18:34.058 "name": "BaseBdev2", 00:18:34.058 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:34.058 "is_configured": true, 00:18:34.058 "data_offset": 2048, 00:18:34.058 "data_size": 63488 00:18:34.058 }, 00:18:34.058 { 00:18:34.058 "name": "BaseBdev3", 00:18:34.058 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:34.058 "is_configured": true, 00:18:34.058 "data_offset": 2048, 
00:18:34.058 "data_size": 63488 00:18:34.058 }, 00:18:34.058 { 00:18:34.058 "name": "BaseBdev4", 00:18:34.058 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:34.058 "is_configured": true, 00:18:34.058 "data_offset": 2048, 00:18:34.058 "data_size": 63488 00:18:34.058 } 00:18:34.058 ] 00:18:34.058 }' 00:18:34.058 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.058 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:34.058 09:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.058 09:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:34.058 09:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:34.996 09:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:34.996 09:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:34.996 09:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.996 09:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:34.996 09:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:34.996 09:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.996 09:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.996 09:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.996 09:55:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.996 09:55:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:34.996 09:55:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.996 09:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.996 "name": "raid_bdev1", 00:18:34.996 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:34.996 "strip_size_kb": 64, 00:18:34.996 "state": "online", 00:18:34.996 "raid_level": "raid5f", 00:18:34.996 "superblock": true, 00:18:34.996 "num_base_bdevs": 4, 00:18:34.996 "num_base_bdevs_discovered": 4, 00:18:34.996 "num_base_bdevs_operational": 4, 00:18:34.996 "process": { 00:18:34.996 "type": "rebuild", 00:18:34.996 "target": "spare", 00:18:34.996 "progress": { 00:18:34.996 "blocks": 42240, 00:18:34.996 "percent": 22 00:18:34.996 } 00:18:34.996 }, 00:18:34.996 "base_bdevs_list": [ 00:18:34.996 { 00:18:34.996 "name": "spare", 00:18:34.996 "uuid": "9d41be3f-a0d9-549c-a15d-db5546b288cb", 00:18:34.996 "is_configured": true, 00:18:34.996 "data_offset": 2048, 00:18:34.996 "data_size": 63488 00:18:34.996 }, 00:18:34.996 { 00:18:34.996 "name": "BaseBdev2", 00:18:34.996 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:34.996 "is_configured": true, 00:18:34.996 "data_offset": 2048, 00:18:34.996 "data_size": 63488 00:18:34.996 }, 00:18:34.996 { 00:18:34.996 "name": "BaseBdev3", 00:18:34.996 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:34.996 "is_configured": true, 00:18:34.996 "data_offset": 2048, 00:18:34.996 "data_size": 63488 00:18:34.996 }, 00:18:34.996 { 00:18:34.996 "name": "BaseBdev4", 00:18:34.996 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:34.996 "is_configured": true, 00:18:34.996 "data_offset": 2048, 00:18:34.996 "data_size": 63488 00:18:34.996 } 00:18:34.996 ] 00:18:34.996 }' 00:18:34.996 09:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.255 09:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.255 09:55:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.255 09:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.255 09:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:36.193 09:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:36.193 09:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.193 09:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.193 09:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.193 09:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.193 09:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.193 09:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.193 09:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.193 09:55:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.193 09:55:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:36.193 09:55:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.193 09:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.193 "name": "raid_bdev1", 00:18:36.193 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:36.193 "strip_size_kb": 64, 00:18:36.193 "state": "online", 00:18:36.193 "raid_level": "raid5f", 00:18:36.193 "superblock": true, 00:18:36.193 "num_base_bdevs": 4, 00:18:36.193 "num_base_bdevs_discovered": 4, 00:18:36.193 "num_base_bdevs_operational": 
4, 00:18:36.193 "process": { 00:18:36.193 "type": "rebuild", 00:18:36.193 "target": "spare", 00:18:36.193 "progress": { 00:18:36.193 "blocks": 65280, 00:18:36.193 "percent": 34 00:18:36.193 } 00:18:36.193 }, 00:18:36.193 "base_bdevs_list": [ 00:18:36.193 { 00:18:36.193 "name": "spare", 00:18:36.193 "uuid": "9d41be3f-a0d9-549c-a15d-db5546b288cb", 00:18:36.193 "is_configured": true, 00:18:36.193 "data_offset": 2048, 00:18:36.193 "data_size": 63488 00:18:36.193 }, 00:18:36.193 { 00:18:36.193 "name": "BaseBdev2", 00:18:36.193 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:36.193 "is_configured": true, 00:18:36.193 "data_offset": 2048, 00:18:36.193 "data_size": 63488 00:18:36.193 }, 00:18:36.193 { 00:18:36.193 "name": "BaseBdev3", 00:18:36.193 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:36.193 "is_configured": true, 00:18:36.193 "data_offset": 2048, 00:18:36.193 "data_size": 63488 00:18:36.193 }, 00:18:36.193 { 00:18:36.193 "name": "BaseBdev4", 00:18:36.193 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:36.193 "is_configured": true, 00:18:36.193 "data_offset": 2048, 00:18:36.193 "data_size": 63488 00:18:36.193 } 00:18:36.193 ] 00:18:36.193 }' 00:18:36.193 09:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.193 09:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.193 09:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.495 09:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.495 09:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:37.466 09:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:37.466 09:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.466 
09:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.466 09:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.466 09:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.466 09:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.466 09:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.466 09:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.466 09:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.466 09:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:37.466 09:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.466 09:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.466 "name": "raid_bdev1", 00:18:37.466 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:37.466 "strip_size_kb": 64, 00:18:37.466 "state": "online", 00:18:37.466 "raid_level": "raid5f", 00:18:37.466 "superblock": true, 00:18:37.466 "num_base_bdevs": 4, 00:18:37.466 "num_base_bdevs_discovered": 4, 00:18:37.466 "num_base_bdevs_operational": 4, 00:18:37.466 "process": { 00:18:37.466 "type": "rebuild", 00:18:37.467 "target": "spare", 00:18:37.467 "progress": { 00:18:37.467 "blocks": 86400, 00:18:37.467 "percent": 45 00:18:37.467 } 00:18:37.467 }, 00:18:37.467 "base_bdevs_list": [ 00:18:37.467 { 00:18:37.467 "name": "spare", 00:18:37.467 "uuid": "9d41be3f-a0d9-549c-a15d-db5546b288cb", 00:18:37.467 "is_configured": true, 00:18:37.467 "data_offset": 2048, 00:18:37.467 "data_size": 63488 00:18:37.467 }, 00:18:37.467 { 00:18:37.467 "name": "BaseBdev2", 00:18:37.467 "uuid": 
"401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:37.467 "is_configured": true, 00:18:37.467 "data_offset": 2048, 00:18:37.467 "data_size": 63488 00:18:37.467 }, 00:18:37.467 { 00:18:37.467 "name": "BaseBdev3", 00:18:37.467 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:37.467 "is_configured": true, 00:18:37.467 "data_offset": 2048, 00:18:37.467 "data_size": 63488 00:18:37.467 }, 00:18:37.467 { 00:18:37.467 "name": "BaseBdev4", 00:18:37.467 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:37.467 "is_configured": true, 00:18:37.467 "data_offset": 2048, 00:18:37.467 "data_size": 63488 00:18:37.467 } 00:18:37.467 ] 00:18:37.467 }' 00:18:37.467 09:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.467 09:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.467 09:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.467 09:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.467 09:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:38.404 09:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:38.404 09:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.404 09:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.404 09:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:38.404 09:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.404 09:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.404 09:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:18:38.404 09:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.404 09:55:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.404 09:55:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.663 09:55:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.663 09:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.663 "name": "raid_bdev1", 00:18:38.663 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:38.663 "strip_size_kb": 64, 00:18:38.663 "state": "online", 00:18:38.663 "raid_level": "raid5f", 00:18:38.663 "superblock": true, 00:18:38.663 "num_base_bdevs": 4, 00:18:38.663 "num_base_bdevs_discovered": 4, 00:18:38.663 "num_base_bdevs_operational": 4, 00:18:38.663 "process": { 00:18:38.663 "type": "rebuild", 00:18:38.663 "target": "spare", 00:18:38.663 "progress": { 00:18:38.663 "blocks": 109440, 00:18:38.663 "percent": 57 00:18:38.663 } 00:18:38.663 }, 00:18:38.663 "base_bdevs_list": [ 00:18:38.663 { 00:18:38.663 "name": "spare", 00:18:38.663 "uuid": "9d41be3f-a0d9-549c-a15d-db5546b288cb", 00:18:38.663 "is_configured": true, 00:18:38.663 "data_offset": 2048, 00:18:38.663 "data_size": 63488 00:18:38.663 }, 00:18:38.663 { 00:18:38.663 "name": "BaseBdev2", 00:18:38.663 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:38.663 "is_configured": true, 00:18:38.663 "data_offset": 2048, 00:18:38.663 "data_size": 63488 00:18:38.663 }, 00:18:38.663 { 00:18:38.663 "name": "BaseBdev3", 00:18:38.663 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:38.663 "is_configured": true, 00:18:38.663 "data_offset": 2048, 00:18:38.663 "data_size": 63488 00:18:38.663 }, 00:18:38.663 { 00:18:38.663 "name": "BaseBdev4", 00:18:38.663 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:38.663 "is_configured": true, 00:18:38.663 "data_offset": 
2048, 00:18:38.663 "data_size": 63488 00:18:38.663 } 00:18:38.663 ] 00:18:38.663 }' 00:18:38.663 09:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.663 09:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:38.663 09:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.663 09:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:38.663 09:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:39.601 09:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:39.601 09:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.601 09:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.601 09:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.601 09:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:39.601 09:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.601 09:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.601 09:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.601 09:55:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.601 09:55:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.602 09:55:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.602 09:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.602 
"name": "raid_bdev1", 00:18:39.602 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:39.602 "strip_size_kb": 64, 00:18:39.602 "state": "online", 00:18:39.602 "raid_level": "raid5f", 00:18:39.602 "superblock": true, 00:18:39.602 "num_base_bdevs": 4, 00:18:39.602 "num_base_bdevs_discovered": 4, 00:18:39.602 "num_base_bdevs_operational": 4, 00:18:39.602 "process": { 00:18:39.602 "type": "rebuild", 00:18:39.602 "target": "spare", 00:18:39.602 "progress": { 00:18:39.602 "blocks": 130560, 00:18:39.602 "percent": 68 00:18:39.602 } 00:18:39.602 }, 00:18:39.602 "base_bdevs_list": [ 00:18:39.602 { 00:18:39.602 "name": "spare", 00:18:39.602 "uuid": "9d41be3f-a0d9-549c-a15d-db5546b288cb", 00:18:39.602 "is_configured": true, 00:18:39.602 "data_offset": 2048, 00:18:39.602 "data_size": 63488 00:18:39.602 }, 00:18:39.602 { 00:18:39.602 "name": "BaseBdev2", 00:18:39.602 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:39.602 "is_configured": true, 00:18:39.602 "data_offset": 2048, 00:18:39.602 "data_size": 63488 00:18:39.602 }, 00:18:39.602 { 00:18:39.602 "name": "BaseBdev3", 00:18:39.602 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:39.602 "is_configured": true, 00:18:39.602 "data_offset": 2048, 00:18:39.602 "data_size": 63488 00:18:39.602 }, 00:18:39.602 { 00:18:39.602 "name": "BaseBdev4", 00:18:39.602 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:39.602 "is_configured": true, 00:18:39.602 "data_offset": 2048, 00:18:39.602 "data_size": 63488 00:18:39.602 } 00:18:39.602 ] 00:18:39.602 }' 00:18:39.602 09:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.861 09:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:39.861 09:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.861 09:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.861 
09:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:40.799 09:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:40.799 09:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:40.799 09:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.799 09:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:40.800 09:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:40.800 09:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.800 09:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.800 09:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.800 09:55:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.800 09:55:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:40.800 09:55:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.800 09:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.800 "name": "raid_bdev1", 00:18:40.800 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:40.800 "strip_size_kb": 64, 00:18:40.800 "state": "online", 00:18:40.800 "raid_level": "raid5f", 00:18:40.800 "superblock": true, 00:18:40.800 "num_base_bdevs": 4, 00:18:40.800 "num_base_bdevs_discovered": 4, 00:18:40.800 "num_base_bdevs_operational": 4, 00:18:40.800 "process": { 00:18:40.800 "type": "rebuild", 00:18:40.800 "target": "spare", 00:18:40.800 "progress": { 00:18:40.800 "blocks": 151680, 00:18:40.800 "percent": 79 00:18:40.800 } 00:18:40.800 }, 
00:18:40.800 "base_bdevs_list": [ 00:18:40.800 { 00:18:40.800 "name": "spare", 00:18:40.800 "uuid": "9d41be3f-a0d9-549c-a15d-db5546b288cb", 00:18:40.800 "is_configured": true, 00:18:40.800 "data_offset": 2048, 00:18:40.800 "data_size": 63488 00:18:40.800 }, 00:18:40.800 { 00:18:40.800 "name": "BaseBdev2", 00:18:40.800 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:40.800 "is_configured": true, 00:18:40.800 "data_offset": 2048, 00:18:40.800 "data_size": 63488 00:18:40.800 }, 00:18:40.800 { 00:18:40.800 "name": "BaseBdev3", 00:18:40.800 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:40.800 "is_configured": true, 00:18:40.800 "data_offset": 2048, 00:18:40.800 "data_size": 63488 00:18:40.800 }, 00:18:40.800 { 00:18:40.800 "name": "BaseBdev4", 00:18:40.800 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:40.800 "is_configured": true, 00:18:40.800 "data_offset": 2048, 00:18:40.800 "data_size": 63488 00:18:40.800 } 00:18:40.800 ] 00:18:40.800 }' 00:18:40.800 09:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.800 09:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:40.800 09:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.059 09:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.059 09:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:41.998 09:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:41.998 09:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.998 09:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.998 09:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:18:41.998 09:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.998 09:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.998 09:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.998 09:55:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.998 09:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.998 09:55:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.998 09:55:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.998 09:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:41.998 "name": "raid_bdev1", 00:18:41.998 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:41.998 "strip_size_kb": 64, 00:18:41.998 "state": "online", 00:18:41.998 "raid_level": "raid5f", 00:18:41.998 "superblock": true, 00:18:41.998 "num_base_bdevs": 4, 00:18:41.998 "num_base_bdevs_discovered": 4, 00:18:41.998 "num_base_bdevs_operational": 4, 00:18:41.998 "process": { 00:18:41.998 "type": "rebuild", 00:18:41.998 "target": "spare", 00:18:41.998 "progress": { 00:18:41.998 "blocks": 174720, 00:18:41.998 "percent": 91 00:18:41.998 } 00:18:41.998 }, 00:18:41.998 "base_bdevs_list": [ 00:18:41.998 { 00:18:41.998 "name": "spare", 00:18:41.998 "uuid": "9d41be3f-a0d9-549c-a15d-db5546b288cb", 00:18:41.998 "is_configured": true, 00:18:41.998 "data_offset": 2048, 00:18:41.998 "data_size": 63488 00:18:41.998 }, 00:18:41.998 { 00:18:41.998 "name": "BaseBdev2", 00:18:41.998 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:41.998 "is_configured": true, 00:18:41.998 "data_offset": 2048, 00:18:41.998 "data_size": 63488 00:18:41.998 }, 00:18:41.998 { 00:18:41.998 "name": "BaseBdev3", 
00:18:41.998 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:41.998 "is_configured": true, 00:18:41.998 "data_offset": 2048, 00:18:41.998 "data_size": 63488 00:18:41.998 }, 00:18:41.998 { 00:18:41.998 "name": "BaseBdev4", 00:18:41.998 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:41.998 "is_configured": true, 00:18:41.998 "data_offset": 2048, 00:18:41.998 "data_size": 63488 00:18:41.998 } 00:18:41.999 ] 00:18:41.999 }' 00:18:41.999 09:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.999 09:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.999 09:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.999 09:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.999 09:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:42.937 [2024-11-27 09:55:43.861966] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:42.937 [2024-11-27 09:55:43.862095] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:42.937 [2024-11-27 09:55:43.862285] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.196 09:55:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.196 "name": "raid_bdev1", 00:18:43.196 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:43.196 "strip_size_kb": 64, 00:18:43.196 "state": "online", 00:18:43.196 "raid_level": "raid5f", 00:18:43.196 "superblock": true, 00:18:43.196 "num_base_bdevs": 4, 00:18:43.196 "num_base_bdevs_discovered": 4, 00:18:43.196 "num_base_bdevs_operational": 4, 00:18:43.196 "base_bdevs_list": [ 00:18:43.196 { 00:18:43.196 "name": "spare", 00:18:43.196 "uuid": "9d41be3f-a0d9-549c-a15d-db5546b288cb", 00:18:43.196 "is_configured": true, 00:18:43.196 "data_offset": 2048, 00:18:43.196 "data_size": 63488 00:18:43.196 }, 00:18:43.196 { 00:18:43.196 "name": "BaseBdev2", 00:18:43.196 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:43.196 "is_configured": true, 00:18:43.196 "data_offset": 2048, 00:18:43.196 "data_size": 63488 00:18:43.196 }, 00:18:43.196 { 00:18:43.196 "name": "BaseBdev3", 00:18:43.196 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:43.196 "is_configured": true, 00:18:43.196 "data_offset": 2048, 00:18:43.196 "data_size": 63488 00:18:43.196 }, 00:18:43.196 { 00:18:43.196 "name": "BaseBdev4", 00:18:43.196 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:43.196 "is_configured": true, 00:18:43.196 "data_offset": 2048, 
00:18:43.196 "data_size": 63488 00:18:43.196 } 00:18:43.196 ] 00:18:43.196 }' 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.196 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.197 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.197 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.456 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.456 "name": "raid_bdev1", 00:18:43.456 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:43.456 "strip_size_kb": 64, 00:18:43.456 
"state": "online", 00:18:43.456 "raid_level": "raid5f", 00:18:43.456 "superblock": true, 00:18:43.456 "num_base_bdevs": 4, 00:18:43.456 "num_base_bdevs_discovered": 4, 00:18:43.456 "num_base_bdevs_operational": 4, 00:18:43.456 "base_bdevs_list": [ 00:18:43.456 { 00:18:43.456 "name": "spare", 00:18:43.456 "uuid": "9d41be3f-a0d9-549c-a15d-db5546b288cb", 00:18:43.456 "is_configured": true, 00:18:43.456 "data_offset": 2048, 00:18:43.456 "data_size": 63488 00:18:43.456 }, 00:18:43.456 { 00:18:43.456 "name": "BaseBdev2", 00:18:43.456 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:43.456 "is_configured": true, 00:18:43.456 "data_offset": 2048, 00:18:43.456 "data_size": 63488 00:18:43.456 }, 00:18:43.456 { 00:18:43.456 "name": "BaseBdev3", 00:18:43.456 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:43.456 "is_configured": true, 00:18:43.456 "data_offset": 2048, 00:18:43.456 "data_size": 63488 00:18:43.456 }, 00:18:43.456 { 00:18:43.456 "name": "BaseBdev4", 00:18:43.456 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:43.456 "is_configured": true, 00:18:43.456 "data_offset": 2048, 00:18:43.456 "data_size": 63488 00:18:43.456 } 00:18:43.456 ] 00:18:43.456 }' 00:18:43.456 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.456 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:43.456 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.456 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:43.456 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:43.456 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.456 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:43.456 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:43.456 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:43.456 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:43.456 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.457 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.457 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.457 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.457 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.457 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.457 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.457 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.457 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.457 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.457 "name": "raid_bdev1", 00:18:43.457 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:43.457 "strip_size_kb": 64, 00:18:43.457 "state": "online", 00:18:43.457 "raid_level": "raid5f", 00:18:43.457 "superblock": true, 00:18:43.457 "num_base_bdevs": 4, 00:18:43.457 "num_base_bdevs_discovered": 4, 00:18:43.457 "num_base_bdevs_operational": 4, 00:18:43.457 "base_bdevs_list": [ 00:18:43.457 { 00:18:43.457 "name": "spare", 00:18:43.457 "uuid": "9d41be3f-a0d9-549c-a15d-db5546b288cb", 00:18:43.457 "is_configured": true, 00:18:43.457 
"data_offset": 2048, 00:18:43.457 "data_size": 63488 00:18:43.457 }, 00:18:43.457 { 00:18:43.457 "name": "BaseBdev2", 00:18:43.457 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:43.457 "is_configured": true, 00:18:43.457 "data_offset": 2048, 00:18:43.457 "data_size": 63488 00:18:43.457 }, 00:18:43.457 { 00:18:43.457 "name": "BaseBdev3", 00:18:43.457 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:43.457 "is_configured": true, 00:18:43.457 "data_offset": 2048, 00:18:43.457 "data_size": 63488 00:18:43.457 }, 00:18:43.457 { 00:18:43.457 "name": "BaseBdev4", 00:18:43.457 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:43.457 "is_configured": true, 00:18:43.457 "data_offset": 2048, 00:18:43.457 "data_size": 63488 00:18:43.457 } 00:18:43.457 ] 00:18:43.457 }' 00:18:43.457 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.457 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.025 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:44.025 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.025 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.025 [2024-11-27 09:55:44.896334] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:44.025 [2024-11-27 09:55:44.896483] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:44.025 [2024-11-27 09:55:44.896662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.025 [2024-11-27 09:55:44.896827] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.025 [2024-11-27 09:55:44.896913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:44.025 
09:55:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.025 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.025 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:44.025 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.025 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.025 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.025 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:44.025 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:44.025 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:44.025 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:44.025 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:44.025 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:44.025 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:44.026 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:44.026 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:44.026 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:44.026 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:44.026 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:44.026 09:55:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:44.284 /dev/nbd0 00:18:44.284 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:44.284 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:44.284 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:44.284 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:44.284 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:44.284 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:44.284 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:44.284 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:44.284 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:44.284 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:44.284 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:44.284 1+0 records in 00:18:44.284 1+0 records out 00:18:44.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662358 s, 6.2 MB/s 00:18:44.284 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.284 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:44.284 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.284 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:44.284 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:44.284 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:44.284 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:44.284 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:44.543 /dev/nbd1 00:18:44.543 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:44.543 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:44.543 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:44.543 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:44.543 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:44.544 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:44.544 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:44.544 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:44.544 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:44.544 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:44.544 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:44.544 1+0 records in 00:18:44.544 1+0 records out 00:18:44.544 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000513617 s, 8.0 MB/s 00:18:44.544 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.544 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:44.544 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:44.544 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:44.544 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:44.544 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:44.544 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:44.544 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:44.803 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:44.803 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:44.803 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:44.803 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:44.803 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:44.803 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:44.803 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:44.803 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:45.062 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:45.062 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:45.062 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:45.062 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:45.062 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:45.062 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:45.062 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:45.062 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:45.062 09:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:45.062 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:45.062 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:45.062 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:45.062 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:45.062 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:45.062 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:45.062 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:45.062 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:45.062 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:45.062 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:45.062 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.062 
09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.062 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.062 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:45.062 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.062 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.322 [2024-11-27 09:55:46.194535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:45.322 [2024-11-27 09:55:46.194619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.322 [2024-11-27 09:55:46.194647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:45.322 [2024-11-27 09:55:46.194657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.322 [2024-11-27 09:55:46.197610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.322 [2024-11-27 09:55:46.197657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:45.322 [2024-11-27 09:55:46.197788] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:45.322 [2024-11-27 09:55:46.197847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:45.322 [2024-11-27 09:55:46.198021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:45.322 [2024-11-27 09:55:46.198128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:45.322 [2024-11-27 09:55:46.198207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:45.322 spare 00:18:45.322 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:18:45.322 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:45.322 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.322 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.322 [2024-11-27 09:55:46.298161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:45.322 [2024-11-27 09:55:46.298279] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:45.322 [2024-11-27 09:55:46.298716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:18:45.322 [2024-11-27 09:55:46.306752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:45.322 [2024-11-27 09:55:46.306793] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:45.322 [2024-11-27 09:55:46.307117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.322 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.322 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:45.322 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.322 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.322 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:45.323 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:45.323 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:45.323 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.323 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.323 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.323 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.323 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.323 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.323 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.323 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.323 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.323 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.323 "name": "raid_bdev1", 00:18:45.323 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:45.323 "strip_size_kb": 64, 00:18:45.323 "state": "online", 00:18:45.323 "raid_level": "raid5f", 00:18:45.323 "superblock": true, 00:18:45.323 "num_base_bdevs": 4, 00:18:45.323 "num_base_bdevs_discovered": 4, 00:18:45.323 "num_base_bdevs_operational": 4, 00:18:45.323 "base_bdevs_list": [ 00:18:45.323 { 00:18:45.323 "name": "spare", 00:18:45.323 "uuid": "9d41be3f-a0d9-549c-a15d-db5546b288cb", 00:18:45.323 "is_configured": true, 00:18:45.323 "data_offset": 2048, 00:18:45.323 "data_size": 63488 00:18:45.323 }, 00:18:45.323 { 00:18:45.323 "name": "BaseBdev2", 00:18:45.323 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:45.323 "is_configured": true, 00:18:45.323 "data_offset": 2048, 00:18:45.323 "data_size": 63488 00:18:45.323 }, 00:18:45.323 { 00:18:45.323 "name": "BaseBdev3", 00:18:45.323 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:45.323 
"is_configured": true, 00:18:45.323 "data_offset": 2048, 00:18:45.323 "data_size": 63488 00:18:45.323 }, 00:18:45.323 { 00:18:45.323 "name": "BaseBdev4", 00:18:45.323 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:45.323 "is_configured": true, 00:18:45.323 "data_offset": 2048, 00:18:45.323 "data_size": 63488 00:18:45.323 } 00:18:45.323 ] 00:18:45.323 }' 00:18:45.323 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.323 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.892 "name": "raid_bdev1", 00:18:45.892 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:45.892 "strip_size_kb": 64, 00:18:45.892 "state": "online", 00:18:45.892 "raid_level": "raid5f", 
00:18:45.892 "superblock": true, 00:18:45.892 "num_base_bdevs": 4, 00:18:45.892 "num_base_bdevs_discovered": 4, 00:18:45.892 "num_base_bdevs_operational": 4, 00:18:45.892 "base_bdevs_list": [ 00:18:45.892 { 00:18:45.892 "name": "spare", 00:18:45.892 "uuid": "9d41be3f-a0d9-549c-a15d-db5546b288cb", 00:18:45.892 "is_configured": true, 00:18:45.892 "data_offset": 2048, 00:18:45.892 "data_size": 63488 00:18:45.892 }, 00:18:45.892 { 00:18:45.892 "name": "BaseBdev2", 00:18:45.892 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:45.892 "is_configured": true, 00:18:45.892 "data_offset": 2048, 00:18:45.892 "data_size": 63488 00:18:45.892 }, 00:18:45.892 { 00:18:45.892 "name": "BaseBdev3", 00:18:45.892 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:45.892 "is_configured": true, 00:18:45.892 "data_offset": 2048, 00:18:45.892 "data_size": 63488 00:18:45.892 }, 00:18:45.892 { 00:18:45.892 "name": "BaseBdev4", 00:18:45.892 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:45.892 "is_configured": true, 00:18:45.892 "data_offset": 2048, 00:18:45.892 "data_size": 63488 00:18:45.892 } 00:18:45.892 ] 00:18:45.892 }' 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.892 [2024-11-27 09:55:46.964396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.892 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.893 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.893 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.893 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:18:45.893 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.893 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.893 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.893 09:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.893 09:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.893 "name": "raid_bdev1", 00:18:45.893 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:45.893 "strip_size_kb": 64, 00:18:45.893 "state": "online", 00:18:45.893 "raid_level": "raid5f", 00:18:45.893 "superblock": true, 00:18:45.893 "num_base_bdevs": 4, 00:18:45.893 "num_base_bdevs_discovered": 3, 00:18:45.893 "num_base_bdevs_operational": 3, 00:18:45.893 "base_bdevs_list": [ 00:18:45.893 { 00:18:45.893 "name": null, 00:18:45.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.893 "is_configured": false, 00:18:45.893 "data_offset": 0, 00:18:45.893 "data_size": 63488 00:18:45.893 }, 00:18:45.893 { 00:18:45.893 "name": "BaseBdev2", 00:18:45.893 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:45.893 "is_configured": true, 00:18:45.893 "data_offset": 2048, 00:18:45.893 "data_size": 63488 00:18:45.893 }, 00:18:45.893 { 00:18:45.893 "name": "BaseBdev3", 00:18:45.893 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:45.893 "is_configured": true, 00:18:45.893 "data_offset": 2048, 00:18:45.893 "data_size": 63488 00:18:45.893 }, 00:18:45.893 { 00:18:45.893 "name": "BaseBdev4", 00:18:45.893 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:45.893 "is_configured": true, 00:18:45.893 "data_offset": 2048, 00:18:45.893 "data_size": 63488 00:18:45.893 } 00:18:45.893 ] 00:18:45.893 }' 00:18:45.893 09:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:45.893 09:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.461 09:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:46.461 09:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.461 09:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.461 [2024-11-27 09:55:47.367748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:46.461 [2024-11-27 09:55:47.368157] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:46.461 [2024-11-27 09:55:47.368188] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:46.461 [2024-11-27 09:55:47.368244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:46.461 [2024-11-27 09:55:47.384213] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:18:46.461 09:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.461 09:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:46.461 [2024-11-27 09:55:47.394598] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:47.399 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:47.399 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.399 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:47.399 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:47.399 09:55:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.399 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.399 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.399 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.399 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.399 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.399 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.399 "name": "raid_bdev1", 00:18:47.399 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:47.399 "strip_size_kb": 64, 00:18:47.399 "state": "online", 00:18:47.399 "raid_level": "raid5f", 00:18:47.399 "superblock": true, 00:18:47.399 "num_base_bdevs": 4, 00:18:47.399 "num_base_bdevs_discovered": 4, 00:18:47.399 "num_base_bdevs_operational": 4, 00:18:47.399 "process": { 00:18:47.399 "type": "rebuild", 00:18:47.399 "target": "spare", 00:18:47.399 "progress": { 00:18:47.399 "blocks": 19200, 00:18:47.399 "percent": 10 00:18:47.399 } 00:18:47.399 }, 00:18:47.399 "base_bdevs_list": [ 00:18:47.399 { 00:18:47.399 "name": "spare", 00:18:47.399 "uuid": "9d41be3f-a0d9-549c-a15d-db5546b288cb", 00:18:47.399 "is_configured": true, 00:18:47.399 "data_offset": 2048, 00:18:47.399 "data_size": 63488 00:18:47.399 }, 00:18:47.399 { 00:18:47.399 "name": "BaseBdev2", 00:18:47.399 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:47.400 "is_configured": true, 00:18:47.400 "data_offset": 2048, 00:18:47.400 "data_size": 63488 00:18:47.400 }, 00:18:47.400 { 00:18:47.400 "name": "BaseBdev3", 00:18:47.400 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:47.400 "is_configured": true, 00:18:47.400 "data_offset": 2048, 00:18:47.400 "data_size": 
63488 00:18:47.400 }, 00:18:47.400 { 00:18:47.400 "name": "BaseBdev4", 00:18:47.400 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:47.400 "is_configured": true, 00:18:47.400 "data_offset": 2048, 00:18:47.400 "data_size": 63488 00:18:47.400 } 00:18:47.400 ] 00:18:47.400 }' 00:18:47.400 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.400 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:47.400 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.659 [2024-11-27 09:55:48.555672] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:47.659 [2024-11-27 09:55:48.607701] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:47.659 [2024-11-27 09:55:48.607839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.659 [2024-11-27 09:55:48.607859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:47.659 [2024-11-27 09:55:48.607871] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.659 "name": "raid_bdev1", 00:18:47.659 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:47.659 "strip_size_kb": 64, 00:18:47.659 "state": "online", 00:18:47.659 "raid_level": "raid5f", 00:18:47.659 "superblock": true, 00:18:47.659 "num_base_bdevs": 4, 00:18:47.659 "num_base_bdevs_discovered": 3, 00:18:47.659 "num_base_bdevs_operational": 3, 00:18:47.659 "base_bdevs_list": [ 00:18:47.659 
{ 00:18:47.659 "name": null, 00:18:47.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.659 "is_configured": false, 00:18:47.659 "data_offset": 0, 00:18:47.659 "data_size": 63488 00:18:47.659 }, 00:18:47.659 { 00:18:47.659 "name": "BaseBdev2", 00:18:47.659 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:47.659 "is_configured": true, 00:18:47.659 "data_offset": 2048, 00:18:47.659 "data_size": 63488 00:18:47.659 }, 00:18:47.659 { 00:18:47.659 "name": "BaseBdev3", 00:18:47.659 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:47.659 "is_configured": true, 00:18:47.659 "data_offset": 2048, 00:18:47.659 "data_size": 63488 00:18:47.659 }, 00:18:47.659 { 00:18:47.659 "name": "BaseBdev4", 00:18:47.659 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:47.659 "is_configured": true, 00:18:47.659 "data_offset": 2048, 00:18:47.659 "data_size": 63488 00:18:47.659 } 00:18:47.659 ] 00:18:47.659 }' 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.659 09:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.236 09:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:48.236 09:55:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.236 09:55:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.236 [2024-11-27 09:55:49.123061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:48.236 [2024-11-27 09:55:49.123281] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.236 [2024-11-27 09:55:49.123340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:48.236 [2024-11-27 09:55:49.123379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.236 [2024-11-27 09:55:49.124093] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.236 [2024-11-27 09:55:49.124178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:48.236 [2024-11-27 09:55:49.124348] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:48.236 [2024-11-27 09:55:49.124401] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:48.236 [2024-11-27 09:55:49.124450] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:48.236 [2024-11-27 09:55:49.124522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:48.236 [2024-11-27 09:55:49.141686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:18:48.236 spare 00:18:48.236 09:55:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.236 09:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:48.236 [2024-11-27 09:55:49.152009] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:49.252 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.252 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.252 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:49.252 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:49.252 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.252 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.252 09:55:50 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.252 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.252 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.252 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.252 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.252 "name": "raid_bdev1", 00:18:49.252 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:49.252 "strip_size_kb": 64, 00:18:49.252 "state": "online", 00:18:49.252 "raid_level": "raid5f", 00:18:49.252 "superblock": true, 00:18:49.252 "num_base_bdevs": 4, 00:18:49.252 "num_base_bdevs_discovered": 4, 00:18:49.252 "num_base_bdevs_operational": 4, 00:18:49.252 "process": { 00:18:49.252 "type": "rebuild", 00:18:49.252 "target": "spare", 00:18:49.252 "progress": { 00:18:49.252 "blocks": 19200, 00:18:49.252 "percent": 10 00:18:49.252 } 00:18:49.252 }, 00:18:49.252 "base_bdevs_list": [ 00:18:49.252 { 00:18:49.252 "name": "spare", 00:18:49.252 "uuid": "9d41be3f-a0d9-549c-a15d-db5546b288cb", 00:18:49.252 "is_configured": true, 00:18:49.252 "data_offset": 2048, 00:18:49.252 "data_size": 63488 00:18:49.252 }, 00:18:49.252 { 00:18:49.252 "name": "BaseBdev2", 00:18:49.252 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:49.252 "is_configured": true, 00:18:49.252 "data_offset": 2048, 00:18:49.252 "data_size": 63488 00:18:49.252 }, 00:18:49.252 { 00:18:49.252 "name": "BaseBdev3", 00:18:49.252 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:49.252 "is_configured": true, 00:18:49.252 "data_offset": 2048, 00:18:49.252 "data_size": 63488 00:18:49.252 }, 00:18:49.252 { 00:18:49.252 "name": "BaseBdev4", 00:18:49.252 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:49.252 "is_configured": true, 00:18:49.252 "data_offset": 2048, 00:18:49.252 "data_size": 63488 00:18:49.252 } 
00:18:49.252 ] 00:18:49.252 }' 00:18:49.252 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.252 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.252 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.252 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.252 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:49.252 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.252 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.252 [2024-11-27 09:55:50.304990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.252 [2024-11-27 09:55:50.365050] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:49.252 [2024-11-27 09:55:50.365286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.252 [2024-11-27 09:55:50.365333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.252 [2024-11-27 09:55:50.365375] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:49.512 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.512 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:49.512 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.512 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.512 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:49.512 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:49.512 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:49.512 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.512 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.512 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.512 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.512 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.512 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.512 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.512 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.512 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.512 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.512 "name": "raid_bdev1", 00:18:49.512 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:49.512 "strip_size_kb": 64, 00:18:49.512 "state": "online", 00:18:49.512 "raid_level": "raid5f", 00:18:49.512 "superblock": true, 00:18:49.512 "num_base_bdevs": 4, 00:18:49.512 "num_base_bdevs_discovered": 3, 00:18:49.512 "num_base_bdevs_operational": 3, 00:18:49.512 "base_bdevs_list": [ 00:18:49.512 { 00:18:49.512 "name": null, 00:18:49.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.512 "is_configured": false, 00:18:49.512 "data_offset": 0, 00:18:49.512 "data_size": 63488 00:18:49.512 }, 00:18:49.512 { 00:18:49.512 
"name": "BaseBdev2", 00:18:49.512 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:49.512 "is_configured": true, 00:18:49.512 "data_offset": 2048, 00:18:49.512 "data_size": 63488 00:18:49.512 }, 00:18:49.512 { 00:18:49.512 "name": "BaseBdev3", 00:18:49.512 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:49.512 "is_configured": true, 00:18:49.512 "data_offset": 2048, 00:18:49.512 "data_size": 63488 00:18:49.512 }, 00:18:49.512 { 00:18:49.512 "name": "BaseBdev4", 00:18:49.512 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:49.512 "is_configured": true, 00:18:49.512 "data_offset": 2048, 00:18:49.512 "data_size": 63488 00:18:49.512 } 00:18:49.512 ] 00:18:49.512 }' 00:18:49.512 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.512 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.771 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:49.771 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.771 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:49.771 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:49.771 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.771 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.771 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.771 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.771 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.771 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:18:49.771 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.771 "name": "raid_bdev1", 00:18:49.771 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:49.771 "strip_size_kb": 64, 00:18:49.771 "state": "online", 00:18:49.771 "raid_level": "raid5f", 00:18:49.771 "superblock": true, 00:18:49.771 "num_base_bdevs": 4, 00:18:49.771 "num_base_bdevs_discovered": 3, 00:18:49.771 "num_base_bdevs_operational": 3, 00:18:49.771 "base_bdevs_list": [ 00:18:49.771 { 00:18:49.771 "name": null, 00:18:49.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.771 "is_configured": false, 00:18:49.771 "data_offset": 0, 00:18:49.771 "data_size": 63488 00:18:49.771 }, 00:18:49.771 { 00:18:49.771 "name": "BaseBdev2", 00:18:49.771 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:49.771 "is_configured": true, 00:18:49.771 "data_offset": 2048, 00:18:49.771 "data_size": 63488 00:18:49.771 }, 00:18:49.771 { 00:18:49.771 "name": "BaseBdev3", 00:18:49.772 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:49.772 "is_configured": true, 00:18:49.772 "data_offset": 2048, 00:18:49.772 "data_size": 63488 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "name": "BaseBdev4", 00:18:49.772 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:49.772 "is_configured": true, 00:18:49.772 "data_offset": 2048, 00:18:49.772 "data_size": 63488 00:18:49.772 } 00:18:49.772 ] 00:18:49.772 }' 00:18:49.772 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.030 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:50.030 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:50.030 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:50.030 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:18:50.030 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.030 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.030 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.030 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:50.030 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.030 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.030 [2024-11-27 09:55:50.988368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:50.030 [2024-11-27 09:55:50.988460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.030 [2024-11-27 09:55:50.988490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:50.030 [2024-11-27 09:55:50.988499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.030 [2024-11-27 09:55:50.989138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.030 [2024-11-27 09:55:50.989162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:50.030 [2024-11-27 09:55:50.989277] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:50.030 [2024-11-27 09:55:50.989296] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:50.030 [2024-11-27 09:55:50.989308] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:50.030 [2024-11-27 09:55:50.989322] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to 
examine bdev BaseBdev1: Invalid argument 00:18:50.030 BaseBdev1 00:18:50.030 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.030 09:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:50.968 09:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:50.968 09:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.968 09:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.968 09:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:50.968 09:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:50.968 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:50.968 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.968 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.968 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.968 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.968 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.968 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.968 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.968 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.968 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.968 09:55:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.968 "name": "raid_bdev1", 00:18:50.969 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:50.969 "strip_size_kb": 64, 00:18:50.969 "state": "online", 00:18:50.969 "raid_level": "raid5f", 00:18:50.969 "superblock": true, 00:18:50.969 "num_base_bdevs": 4, 00:18:50.969 "num_base_bdevs_discovered": 3, 00:18:50.969 "num_base_bdevs_operational": 3, 00:18:50.969 "base_bdevs_list": [ 00:18:50.969 { 00:18:50.969 "name": null, 00:18:50.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.969 "is_configured": false, 00:18:50.969 "data_offset": 0, 00:18:50.969 "data_size": 63488 00:18:50.969 }, 00:18:50.969 { 00:18:50.969 "name": "BaseBdev2", 00:18:50.969 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:50.969 "is_configured": true, 00:18:50.969 "data_offset": 2048, 00:18:50.969 "data_size": 63488 00:18:50.969 }, 00:18:50.969 { 00:18:50.969 "name": "BaseBdev3", 00:18:50.969 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:50.969 "is_configured": true, 00:18:50.969 "data_offset": 2048, 00:18:50.969 "data_size": 63488 00:18:50.969 }, 00:18:50.969 { 00:18:50.969 "name": "BaseBdev4", 00:18:50.969 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:50.969 "is_configured": true, 00:18:50.969 "data_offset": 2048, 00:18:50.969 "data_size": 63488 00:18:50.969 } 00:18:50.969 ] 00:18:50.969 }' 00:18:50.969 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.969 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:51.538 09:55:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.538 "name": "raid_bdev1", 00:18:51.538 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:51.538 "strip_size_kb": 64, 00:18:51.538 "state": "online", 00:18:51.538 "raid_level": "raid5f", 00:18:51.538 "superblock": true, 00:18:51.538 "num_base_bdevs": 4, 00:18:51.538 "num_base_bdevs_discovered": 3, 00:18:51.538 "num_base_bdevs_operational": 3, 00:18:51.538 "base_bdevs_list": [ 00:18:51.538 { 00:18:51.538 "name": null, 00:18:51.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.538 "is_configured": false, 00:18:51.538 "data_offset": 0, 00:18:51.538 "data_size": 63488 00:18:51.538 }, 00:18:51.538 { 00:18:51.538 "name": "BaseBdev2", 00:18:51.538 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:51.538 "is_configured": true, 00:18:51.538 "data_offset": 2048, 00:18:51.538 "data_size": 63488 00:18:51.538 }, 00:18:51.538 { 00:18:51.538 "name": "BaseBdev3", 00:18:51.538 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:51.538 "is_configured": true, 00:18:51.538 "data_offset": 2048, 00:18:51.538 "data_size": 63488 00:18:51.538 }, 00:18:51.538 { 00:18:51.538 "name": "BaseBdev4", 00:18:51.538 "uuid": 
"4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:51.538 "is_configured": true, 00:18:51.538 "data_offset": 2048, 00:18:51.538 "data_size": 63488 00:18:51.538 } 00:18:51.538 ] 00:18:51.538 }' 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:51.538 [2024-11-27 09:55:52.601702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:51.538 
[2024-11-27 09:55:52.602040] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:51.538 [2024-11-27 09:55:52.602115] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:51.538 request: 00:18:51.538 { 00:18:51.538 "base_bdev": "BaseBdev1", 00:18:51.538 "raid_bdev": "raid_bdev1", 00:18:51.538 "method": "bdev_raid_add_base_bdev", 00:18:51.538 "req_id": 1 00:18:51.538 } 00:18:51.538 Got JSON-RPC error response 00:18:51.538 response: 00:18:51.538 { 00:18:51.538 "code": -22, 00:18:51.538 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:51.538 } 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:51.538 09:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:52.917 09:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:52.917 09:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.917 09:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.917 09:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:52.917 09:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:52.917 09:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:18:52.917 09:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.917 09:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.917 09:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.917 09:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.917 09:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.917 09:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.917 09:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.918 09:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.918 09:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.918 09:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.918 "name": "raid_bdev1", 00:18:52.918 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:52.918 "strip_size_kb": 64, 00:18:52.918 "state": "online", 00:18:52.918 "raid_level": "raid5f", 00:18:52.918 "superblock": true, 00:18:52.918 "num_base_bdevs": 4, 00:18:52.918 "num_base_bdevs_discovered": 3, 00:18:52.918 "num_base_bdevs_operational": 3, 00:18:52.918 "base_bdevs_list": [ 00:18:52.918 { 00:18:52.918 "name": null, 00:18:52.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.918 "is_configured": false, 00:18:52.918 "data_offset": 0, 00:18:52.918 "data_size": 63488 00:18:52.918 }, 00:18:52.918 { 00:18:52.918 "name": "BaseBdev2", 00:18:52.918 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:52.918 "is_configured": true, 00:18:52.918 "data_offset": 2048, 00:18:52.918 "data_size": 63488 00:18:52.918 }, 00:18:52.918 { 00:18:52.918 "name": 
"BaseBdev3", 00:18:52.918 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:52.918 "is_configured": true, 00:18:52.918 "data_offset": 2048, 00:18:52.918 "data_size": 63488 00:18:52.918 }, 00:18:52.918 { 00:18:52.918 "name": "BaseBdev4", 00:18:52.918 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:52.918 "is_configured": true, 00:18:52.918 "data_offset": 2048, 00:18:52.918 "data_size": 63488 00:18:52.918 } 00:18:52.918 ] 00:18:52.918 }' 00:18:52.918 09:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.918 09:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.178 "name": "raid_bdev1", 00:18:53.178 "uuid": "cf0fddad-2e38-4041-9693-10f4e1223679", 00:18:53.178 
"strip_size_kb": 64, 00:18:53.178 "state": "online", 00:18:53.178 "raid_level": "raid5f", 00:18:53.178 "superblock": true, 00:18:53.178 "num_base_bdevs": 4, 00:18:53.178 "num_base_bdevs_discovered": 3, 00:18:53.178 "num_base_bdevs_operational": 3, 00:18:53.178 "base_bdevs_list": [ 00:18:53.178 { 00:18:53.178 "name": null, 00:18:53.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.178 "is_configured": false, 00:18:53.178 "data_offset": 0, 00:18:53.178 "data_size": 63488 00:18:53.178 }, 00:18:53.178 { 00:18:53.178 "name": "BaseBdev2", 00:18:53.178 "uuid": "401b7a58-d4eb-565d-a901-5942050af9d3", 00:18:53.178 "is_configured": true, 00:18:53.178 "data_offset": 2048, 00:18:53.178 "data_size": 63488 00:18:53.178 }, 00:18:53.178 { 00:18:53.178 "name": "BaseBdev3", 00:18:53.178 "uuid": "e6fdf9a1-006b-5166-809f-f898440b71fc", 00:18:53.178 "is_configured": true, 00:18:53.178 "data_offset": 2048, 00:18:53.178 "data_size": 63488 00:18:53.178 }, 00:18:53.178 { 00:18:53.178 "name": "BaseBdev4", 00:18:53.178 "uuid": "4875b6ab-c30d-555a-9794-c4cbf46f1580", 00:18:53.178 "is_configured": true, 00:18:53.178 "data_offset": 2048, 00:18:53.178 "data_size": 63488 00:18:53.178 } 00:18:53.178 ] 00:18:53.178 }' 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85439 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85439 ']' 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85439 00:18:53.178 
09:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85439 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85439' 00:18:53.178 killing process with pid 85439 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85439 00:18:53.178 Received shutdown signal, test time was about 60.000000 seconds 00:18:53.178 00:18:53.178 Latency(us) 00:18:53.178 [2024-11-27T09:55:54.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.178 [2024-11-27T09:55:54.311Z] =================================================================================================================== 00:18:53.178 [2024-11-27T09:55:54.311Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:53.178 [2024-11-27 09:55:54.274744] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:53.178 09:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85439 00:18:53.178 [2024-11-27 09:55:54.274940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:53.178 [2024-11-27 09:55:54.275056] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:53.178 [2024-11-27 09:55:54.275079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:53.747 [2024-11-27 09:55:54.814010] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:55.125 09:55:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:55.125 00:18:55.125 real 0m27.529s 00:18:55.125 user 0m34.273s 00:18:55.125 sys 0m3.420s 00:18:55.125 09:55:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:55.125 09:55:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:55.125 ************************************ 00:18:55.125 END TEST raid5f_rebuild_test_sb 00:18:55.125 ************************************ 00:18:55.125 09:55:56 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:18:55.125 09:55:56 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:18:55.125 09:55:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:55.125 09:55:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.125 09:55:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:55.125 ************************************ 00:18:55.125 START TEST raid_state_function_test_sb_4k 00:18:55.125 ************************************ 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:55.125 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:55.126 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:55.126 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:55.126 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86256 
00:18:55.126 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:55.126 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86256' 00:18:55.126 Process raid pid: 86256 00:18:55.126 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86256 00:18:55.126 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86256 ']' 00:18:55.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.126 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.126 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.126 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.126 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.126 09:55:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:55.126 [2024-11-27 09:55:56.228612] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:18:55.126 [2024-11-27 09:55:56.228781] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.385 [2024-11-27 09:55:56.390764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.644 [2024-11-27 09:55:56.538024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.904 [2024-11-27 09:55:56.788227] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:55.904 [2024-11-27 09:55:56.788293] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:56.163 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.163 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:18:56.163 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:56.163 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.163 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.163 [2024-11-27 09:55:57.086776] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:56.163 [2024-11-27 09:55:57.086856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:56.163 [2024-11-27 09:55:57.086867] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:56.163 [2024-11-27 09:55:57.086894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:56.163 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:56.163 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:56.163 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:56.163 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:56.163 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.163 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.163 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:56.163 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.163 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.163 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.163 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.164 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.164 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.164 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.164 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.164 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.164 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.164 "name": "Existed_Raid", 00:18:56.164 "uuid": 
"2758e8de-2797-475e-a357-9e7a74b1c65f", 00:18:56.164 "strip_size_kb": 0, 00:18:56.164 "state": "configuring", 00:18:56.164 "raid_level": "raid1", 00:18:56.164 "superblock": true, 00:18:56.164 "num_base_bdevs": 2, 00:18:56.164 "num_base_bdevs_discovered": 0, 00:18:56.164 "num_base_bdevs_operational": 2, 00:18:56.164 "base_bdevs_list": [ 00:18:56.164 { 00:18:56.164 "name": "BaseBdev1", 00:18:56.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.164 "is_configured": false, 00:18:56.164 "data_offset": 0, 00:18:56.164 "data_size": 0 00:18:56.164 }, 00:18:56.164 { 00:18:56.164 "name": "BaseBdev2", 00:18:56.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.164 "is_configured": false, 00:18:56.164 "data_offset": 0, 00:18:56.164 "data_size": 0 00:18:56.164 } 00:18:56.164 ] 00:18:56.164 }' 00:18:56.164 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.164 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.423 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:56.423 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.423 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.423 [2024-11-27 09:55:57.501963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:56.423 [2024-11-27 09:55:57.502112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:56.423 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.423 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:56.423 09:55:57 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.423 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.423 [2024-11-27 09:55:57.513956] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:56.423 [2024-11-27 09:55:57.514124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:56.423 [2024-11-27 09:55:57.514157] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:56.423 [2024-11-27 09:55:57.514184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:56.423 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.423 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:18:56.423 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.423 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.683 [2024-11-27 09:55:57.566565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:56.683 BaseBdev1 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.683 [ 00:18:56.683 { 00:18:56.683 "name": "BaseBdev1", 00:18:56.683 "aliases": [ 00:18:56.683 "7cf1c782-2228-4365-a34e-6d45fbb7f610" 00:18:56.683 ], 00:18:56.683 "product_name": "Malloc disk", 00:18:56.683 "block_size": 4096, 00:18:56.683 "num_blocks": 8192, 00:18:56.683 "uuid": "7cf1c782-2228-4365-a34e-6d45fbb7f610", 00:18:56.683 "assigned_rate_limits": { 00:18:56.683 "rw_ios_per_sec": 0, 00:18:56.683 "rw_mbytes_per_sec": 0, 00:18:56.683 "r_mbytes_per_sec": 0, 00:18:56.683 "w_mbytes_per_sec": 0 00:18:56.683 }, 00:18:56.683 "claimed": true, 00:18:56.683 "claim_type": "exclusive_write", 00:18:56.683 "zoned": false, 00:18:56.683 "supported_io_types": { 00:18:56.683 "read": true, 00:18:56.683 "write": true, 00:18:56.683 "unmap": true, 00:18:56.683 "flush": true, 00:18:56.683 "reset": true, 00:18:56.683 "nvme_admin": false, 00:18:56.683 "nvme_io": false, 00:18:56.683 "nvme_io_md": false, 00:18:56.683 "write_zeroes": true, 00:18:56.683 "zcopy": true, 00:18:56.683 
"get_zone_info": false, 00:18:56.683 "zone_management": false, 00:18:56.683 "zone_append": false, 00:18:56.683 "compare": false, 00:18:56.683 "compare_and_write": false, 00:18:56.683 "abort": true, 00:18:56.683 "seek_hole": false, 00:18:56.683 "seek_data": false, 00:18:56.683 "copy": true, 00:18:56.683 "nvme_iov_md": false 00:18:56.683 }, 00:18:56.683 "memory_domains": [ 00:18:56.683 { 00:18:56.683 "dma_device_id": "system", 00:18:56.683 "dma_device_type": 1 00:18:56.683 }, 00:18:56.683 { 00:18:56.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.683 "dma_device_type": 2 00:18:56.683 } 00:18:56.683 ], 00:18:56.683 "driver_specific": {} 00:18:56.683 } 00:18:56.683 ] 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.683 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.683 "name": "Existed_Raid", 00:18:56.683 "uuid": "5ad89b5c-377c-4935-8637-ac6c2c0447ed", 00:18:56.683 "strip_size_kb": 0, 00:18:56.683 "state": "configuring", 00:18:56.683 "raid_level": "raid1", 00:18:56.683 "superblock": true, 00:18:56.683 "num_base_bdevs": 2, 00:18:56.683 "num_base_bdevs_discovered": 1, 00:18:56.683 "num_base_bdevs_operational": 2, 00:18:56.683 "base_bdevs_list": [ 00:18:56.683 { 00:18:56.683 "name": "BaseBdev1", 00:18:56.683 "uuid": "7cf1c782-2228-4365-a34e-6d45fbb7f610", 00:18:56.683 "is_configured": true, 00:18:56.684 "data_offset": 256, 00:18:56.684 "data_size": 7936 00:18:56.684 }, 00:18:56.684 { 00:18:56.684 "name": "BaseBdev2", 00:18:56.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.684 "is_configured": false, 00:18:56.684 "data_offset": 0, 00:18:56.684 "data_size": 0 00:18:56.684 } 00:18:56.684 ] 00:18:56.684 }' 00:18:56.684 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.684 09:55:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.943 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:56.943 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.943 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:56.943 [2024-11-27 09:55:58.069871] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:56.943 [2024-11-27 09:55:58.070086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.201 [2024-11-27 09:55:58.081884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:57.201 [2024-11-27 09:55:58.084194] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:57.201 [2024-11-27 09:55:58.084287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:57.201 09:55:58 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.201 "name": "Existed_Raid", 00:18:57.201 "uuid": "73b55e40-95c0-425b-bbcd-c4a0eefd0765", 00:18:57.201 "strip_size_kb": 0, 00:18:57.201 "state": "configuring", 00:18:57.201 "raid_level": "raid1", 00:18:57.201 "superblock": true, 
00:18:57.201 "num_base_bdevs": 2, 00:18:57.201 "num_base_bdevs_discovered": 1, 00:18:57.201 "num_base_bdevs_operational": 2, 00:18:57.201 "base_bdevs_list": [ 00:18:57.201 { 00:18:57.201 "name": "BaseBdev1", 00:18:57.201 "uuid": "7cf1c782-2228-4365-a34e-6d45fbb7f610", 00:18:57.201 "is_configured": true, 00:18:57.201 "data_offset": 256, 00:18:57.201 "data_size": 7936 00:18:57.201 }, 00:18:57.201 { 00:18:57.201 "name": "BaseBdev2", 00:18:57.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.201 "is_configured": false, 00:18:57.201 "data_offset": 0, 00:18:57.201 "data_size": 0 00:18:57.201 } 00:18:57.201 ] 00:18:57.201 }' 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.201 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.460 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:18:57.460 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.460 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.460 [2024-11-27 09:55:58.574672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:57.460 [2024-11-27 09:55:58.575179] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:57.460 [2024-11-27 09:55:58.575238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:57.460 [2024-11-27 09:55:58.575572] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:57.460 [2024-11-27 09:55:58.575813] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:57.460 [2024-11-27 09:55:58.575865] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:18:57.460 BaseBdev2 00:18:57.460 [2024-11-27 09:55:58.576085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.460 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.460 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:57.460 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:57.460 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:57.460 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:18:57.460 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:57.460 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:57.460 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:57.460 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.460 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.720 [ 00:18:57.720 { 00:18:57.720 "name": "BaseBdev2", 00:18:57.720 "aliases": [ 00:18:57.720 "7f931d4d-9ab7-4729-ab84-bf598a235f49" 00:18:57.720 ], 00:18:57.720 "product_name": "Malloc 
disk", 00:18:57.720 "block_size": 4096, 00:18:57.720 "num_blocks": 8192, 00:18:57.720 "uuid": "7f931d4d-9ab7-4729-ab84-bf598a235f49", 00:18:57.720 "assigned_rate_limits": { 00:18:57.720 "rw_ios_per_sec": 0, 00:18:57.720 "rw_mbytes_per_sec": 0, 00:18:57.720 "r_mbytes_per_sec": 0, 00:18:57.720 "w_mbytes_per_sec": 0 00:18:57.720 }, 00:18:57.720 "claimed": true, 00:18:57.720 "claim_type": "exclusive_write", 00:18:57.720 "zoned": false, 00:18:57.720 "supported_io_types": { 00:18:57.720 "read": true, 00:18:57.720 "write": true, 00:18:57.720 "unmap": true, 00:18:57.720 "flush": true, 00:18:57.720 "reset": true, 00:18:57.720 "nvme_admin": false, 00:18:57.720 "nvme_io": false, 00:18:57.720 "nvme_io_md": false, 00:18:57.720 "write_zeroes": true, 00:18:57.720 "zcopy": true, 00:18:57.720 "get_zone_info": false, 00:18:57.720 "zone_management": false, 00:18:57.720 "zone_append": false, 00:18:57.720 "compare": false, 00:18:57.720 "compare_and_write": false, 00:18:57.720 "abort": true, 00:18:57.720 "seek_hole": false, 00:18:57.720 "seek_data": false, 00:18:57.720 "copy": true, 00:18:57.720 "nvme_iov_md": false 00:18:57.720 }, 00:18:57.720 "memory_domains": [ 00:18:57.720 { 00:18:57.720 "dma_device_id": "system", 00:18:57.720 "dma_device_type": 1 00:18:57.720 }, 00:18:57.720 { 00:18:57.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.720 "dma_device_type": 2 00:18:57.720 } 00:18:57.720 ], 00:18:57.720 "driver_specific": {} 00:18:57.720 } 00:18:57.720 ] 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.720 "name": "Existed_Raid", 00:18:57.720 "uuid": "73b55e40-95c0-425b-bbcd-c4a0eefd0765", 00:18:57.720 "strip_size_kb": 0, 00:18:57.720 "state": "online", 
00:18:57.720 "raid_level": "raid1", 00:18:57.720 "superblock": true, 00:18:57.720 "num_base_bdevs": 2, 00:18:57.720 "num_base_bdevs_discovered": 2, 00:18:57.720 "num_base_bdevs_operational": 2, 00:18:57.720 "base_bdevs_list": [ 00:18:57.720 { 00:18:57.720 "name": "BaseBdev1", 00:18:57.720 "uuid": "7cf1c782-2228-4365-a34e-6d45fbb7f610", 00:18:57.720 "is_configured": true, 00:18:57.720 "data_offset": 256, 00:18:57.720 "data_size": 7936 00:18:57.720 }, 00:18:57.720 { 00:18:57.720 "name": "BaseBdev2", 00:18:57.720 "uuid": "7f931d4d-9ab7-4729-ab84-bf598a235f49", 00:18:57.720 "is_configured": true, 00:18:57.720 "data_offset": 256, 00:18:57.720 "data_size": 7936 00:18:57.720 } 00:18:57.720 ] 00:18:57.720 }' 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.720 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.980 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:57.980 09:55:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:57.980 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:57.980 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:57.980 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:57.980 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:57.980 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:57.980 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:57.980 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:57.980 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:57.980 [2024-11-27 09:55:59.014310] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:57.980 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.980 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:57.980 "name": "Existed_Raid", 00:18:57.980 "aliases": [ 00:18:57.980 "73b55e40-95c0-425b-bbcd-c4a0eefd0765" 00:18:57.980 ], 00:18:57.980 "product_name": "Raid Volume", 00:18:57.980 "block_size": 4096, 00:18:57.980 "num_blocks": 7936, 00:18:57.980 "uuid": "73b55e40-95c0-425b-bbcd-c4a0eefd0765", 00:18:57.980 "assigned_rate_limits": { 00:18:57.980 "rw_ios_per_sec": 0, 00:18:57.980 "rw_mbytes_per_sec": 0, 00:18:57.980 "r_mbytes_per_sec": 0, 00:18:57.980 "w_mbytes_per_sec": 0 00:18:57.980 }, 00:18:57.980 "claimed": false, 00:18:57.980 "zoned": false, 00:18:57.980 "supported_io_types": { 00:18:57.980 "read": true, 00:18:57.980 "write": true, 00:18:57.980 "unmap": false, 00:18:57.980 "flush": false, 00:18:57.980 "reset": true, 00:18:57.980 "nvme_admin": false, 00:18:57.980 "nvme_io": false, 00:18:57.980 "nvme_io_md": false, 00:18:57.980 "write_zeroes": true, 00:18:57.980 "zcopy": false, 00:18:57.980 "get_zone_info": false, 00:18:57.980 "zone_management": false, 00:18:57.980 "zone_append": false, 00:18:57.980 "compare": false, 00:18:57.980 "compare_and_write": false, 00:18:57.980 "abort": false, 00:18:57.980 "seek_hole": false, 00:18:57.980 "seek_data": false, 00:18:57.980 "copy": false, 00:18:57.980 "nvme_iov_md": false 00:18:57.980 }, 00:18:57.980 "memory_domains": [ 00:18:57.980 { 00:18:57.980 "dma_device_id": "system", 00:18:57.980 "dma_device_type": 1 00:18:57.980 }, 00:18:57.980 { 00:18:57.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.980 "dma_device_type": 2 00:18:57.980 }, 00:18:57.980 { 00:18:57.980 
"dma_device_id": "system", 00:18:57.980 "dma_device_type": 1 00:18:57.980 }, 00:18:57.980 { 00:18:57.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.980 "dma_device_type": 2 00:18:57.980 } 00:18:57.980 ], 00:18:57.980 "driver_specific": { 00:18:57.980 "raid": { 00:18:57.980 "uuid": "73b55e40-95c0-425b-bbcd-c4a0eefd0765", 00:18:57.980 "strip_size_kb": 0, 00:18:57.980 "state": "online", 00:18:57.980 "raid_level": "raid1", 00:18:57.980 "superblock": true, 00:18:57.980 "num_base_bdevs": 2, 00:18:57.980 "num_base_bdevs_discovered": 2, 00:18:57.980 "num_base_bdevs_operational": 2, 00:18:57.980 "base_bdevs_list": [ 00:18:57.980 { 00:18:57.980 "name": "BaseBdev1", 00:18:57.980 "uuid": "7cf1c782-2228-4365-a34e-6d45fbb7f610", 00:18:57.980 "is_configured": true, 00:18:57.980 "data_offset": 256, 00:18:57.980 "data_size": 7936 00:18:57.980 }, 00:18:57.980 { 00:18:57.980 "name": "BaseBdev2", 00:18:57.980 "uuid": "7f931d4d-9ab7-4729-ab84-bf598a235f49", 00:18:57.980 "is_configured": true, 00:18:57.980 "data_offset": 256, 00:18:57.980 "data_size": 7936 00:18:57.980 } 00:18:57.980 ] 00:18:57.980 } 00:18:57.980 } 00:18:57.980 }' 00:18:57.980 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:57.980 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:57.980 BaseBdev2' 00:18:57.980 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.239 
09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.239 [2024-11-27 09:55:59.205750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.239 09:55:59 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.239 "name": "Existed_Raid", 00:18:58.239 "uuid": "73b55e40-95c0-425b-bbcd-c4a0eefd0765", 00:18:58.239 "strip_size_kb": 0, 00:18:58.239 "state": "online", 00:18:58.239 "raid_level": "raid1", 00:18:58.239 "superblock": true, 00:18:58.239 "num_base_bdevs": 2, 00:18:58.239 "num_base_bdevs_discovered": 1, 00:18:58.239 "num_base_bdevs_operational": 1, 00:18:58.239 "base_bdevs_list": [ 00:18:58.239 { 00:18:58.239 "name": null, 00:18:58.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.239 "is_configured": false, 00:18:58.239 "data_offset": 0, 00:18:58.239 "data_size": 7936 00:18:58.239 }, 00:18:58.239 { 00:18:58.239 "name": "BaseBdev2", 00:18:58.239 "uuid": "7f931d4d-9ab7-4729-ab84-bf598a235f49", 00:18:58.239 "is_configured": true, 00:18:58.239 "data_offset": 256, 00:18:58.239 "data_size": 7936 00:18:58.239 } 00:18:58.239 ] 00:18:58.239 }' 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.239 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.808 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:58.808 09:55:59 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:58.808 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:58.808 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.808 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.808 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.808 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.808 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:58.808 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:58.808 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:58.808 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.808 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:58.808 [2024-11-27 09:55:59.814625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:58.808 [2024-11-27 09:55:59.814766] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:58.808 [2024-11-27 09:55:59.920254] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:58.808 [2024-11-27 09:55:59.920340] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:58.808 [2024-11-27 09:55:59.920355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:58.808 09:55:59 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.808 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:58.808 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:58.808 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.808 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:58.809 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.809 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:59.069 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.069 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:59.069 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:59.069 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:59.069 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86256 00:18:59.069 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86256 ']' 00:18:59.069 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86256 00:18:59.069 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:18:59.069 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.069 09:55:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86256 00:18:59.069 09:56:00 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:59.069 09:56:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:59.069 09:56:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86256' 00:18:59.069 killing process with pid 86256 00:18:59.069 09:56:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86256 00:18:59.069 [2024-11-27 09:56:00.018713] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:59.069 09:56:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86256 00:18:59.069 [2024-11-27 09:56:00.037560] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:00.448 09:56:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:19:00.448 00:19:00.448 real 0m5.160s 00:19:00.448 user 0m7.168s 00:19:00.448 sys 0m1.026s 00:19:00.448 ************************************ 00:19:00.448 END TEST raid_state_function_test_sb_4k 00:19:00.448 ************************************ 00:19:00.448 09:56:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.448 09:56:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.448 09:56:01 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:19:00.448 09:56:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:00.448 09:56:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.448 09:56:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:00.449 ************************************ 00:19:00.449 START TEST raid_superblock_test_4k 00:19:00.449 ************************************ 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:00.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86508 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86508 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86508 ']' 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:00.449 09:56:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:00.449 [2024-11-27 09:56:01.459928] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:19:00.449 [2024-11-27 09:56:01.460121] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86508 ] 00:19:00.708 [2024-11-27 09:56:01.641535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.708 [2024-11-27 09:56:01.784061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.968 [2024-11-27 09:56:02.028365] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:00.968 [2024-11-27 09:56:02.028436] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:01.227 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.227 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:19:01.227 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:01.227 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:01.227 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:01.227 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:01.227 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:01.227 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:01.227 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:01.227 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:01.227 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:19:01.227 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.227 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.487 malloc1 00:19:01.487 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.487 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:01.487 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.487 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.487 [2024-11-27 09:56:02.373807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:01.487 [2024-11-27 09:56:02.373986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.487 [2024-11-27 09:56:02.374052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:01.487 [2024-11-27 09:56:02.374088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.487 [2024-11-27 09:56:02.376714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.487 [2024-11-27 09:56:02.376822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:01.487 pt1 00:19:01.487 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.487 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:01.487 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:01.487 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:01.487 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:19:01.487 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:01.487 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:01.487 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:01.487 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:01.487 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:19:01.487 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.487 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.487 malloc2 00:19:01.487 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.487 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:01.487 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.487 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.487 [2024-11-27 09:56:02.440821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:01.487 [2024-11-27 09:56:02.441008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.488 [2024-11-27 09:56:02.441074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:01.488 [2024-11-27 09:56:02.441085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.488 [2024-11-27 09:56:02.443744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.488 [2024-11-27 
09:56:02.443822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:01.488 pt2 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.488 [2024-11-27 09:56:02.452883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:01.488 [2024-11-27 09:56:02.455259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:01.488 [2024-11-27 09:56:02.455552] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:01.488 [2024-11-27 09:56:02.455616] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:01.488 [2024-11-27 09:56:02.455979] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:01.488 [2024-11-27 09:56:02.456253] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:01.488 [2024-11-27 09:56:02.456277] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:01.488 [2024-11-27 09:56:02.456503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.488 "name": "raid_bdev1", 00:19:01.488 "uuid": "f44ee054-182f-4d8c-bfb2-055412f43c0d", 00:19:01.488 "strip_size_kb": 0, 00:19:01.488 "state": "online", 00:19:01.488 "raid_level": "raid1", 00:19:01.488 "superblock": true, 00:19:01.488 "num_base_bdevs": 2, 00:19:01.488 
"num_base_bdevs_discovered": 2, 00:19:01.488 "num_base_bdevs_operational": 2, 00:19:01.488 "base_bdevs_list": [ 00:19:01.488 { 00:19:01.488 "name": "pt1", 00:19:01.488 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:01.488 "is_configured": true, 00:19:01.488 "data_offset": 256, 00:19:01.488 "data_size": 7936 00:19:01.488 }, 00:19:01.488 { 00:19:01.488 "name": "pt2", 00:19:01.488 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:01.488 "is_configured": true, 00:19:01.488 "data_offset": 256, 00:19:01.488 "data_size": 7936 00:19:01.488 } 00:19:01.488 ] 00:19:01.488 }' 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.488 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.059 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:02.059 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:02.059 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:02.059 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:02.059 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:02.059 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:02.059 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:02.059 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:02.059 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.059 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.059 [2024-11-27 09:56:02.900484] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:02.059 09:56:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.059 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:02.059 "name": "raid_bdev1", 00:19:02.059 "aliases": [ 00:19:02.059 "f44ee054-182f-4d8c-bfb2-055412f43c0d" 00:19:02.059 ], 00:19:02.059 "product_name": "Raid Volume", 00:19:02.059 "block_size": 4096, 00:19:02.059 "num_blocks": 7936, 00:19:02.059 "uuid": "f44ee054-182f-4d8c-bfb2-055412f43c0d", 00:19:02.059 "assigned_rate_limits": { 00:19:02.059 "rw_ios_per_sec": 0, 00:19:02.059 "rw_mbytes_per_sec": 0, 00:19:02.059 "r_mbytes_per_sec": 0, 00:19:02.059 "w_mbytes_per_sec": 0 00:19:02.059 }, 00:19:02.059 "claimed": false, 00:19:02.059 "zoned": false, 00:19:02.059 "supported_io_types": { 00:19:02.059 "read": true, 00:19:02.059 "write": true, 00:19:02.059 "unmap": false, 00:19:02.059 "flush": false, 00:19:02.059 "reset": true, 00:19:02.059 "nvme_admin": false, 00:19:02.059 "nvme_io": false, 00:19:02.059 "nvme_io_md": false, 00:19:02.059 "write_zeroes": true, 00:19:02.059 "zcopy": false, 00:19:02.059 "get_zone_info": false, 00:19:02.059 "zone_management": false, 00:19:02.059 "zone_append": false, 00:19:02.059 "compare": false, 00:19:02.059 "compare_and_write": false, 00:19:02.059 "abort": false, 00:19:02.059 "seek_hole": false, 00:19:02.059 "seek_data": false, 00:19:02.059 "copy": false, 00:19:02.059 "nvme_iov_md": false 00:19:02.059 }, 00:19:02.059 "memory_domains": [ 00:19:02.059 { 00:19:02.059 "dma_device_id": "system", 00:19:02.059 "dma_device_type": 1 00:19:02.059 }, 00:19:02.059 { 00:19:02.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.059 "dma_device_type": 2 00:19:02.059 }, 00:19:02.059 { 00:19:02.059 "dma_device_id": "system", 00:19:02.059 "dma_device_type": 1 00:19:02.059 }, 00:19:02.059 { 00:19:02.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.059 "dma_device_type": 2 00:19:02.059 } 00:19:02.059 ], 
00:19:02.059 "driver_specific": { 00:19:02.059 "raid": { 00:19:02.059 "uuid": "f44ee054-182f-4d8c-bfb2-055412f43c0d", 00:19:02.059 "strip_size_kb": 0, 00:19:02.059 "state": "online", 00:19:02.059 "raid_level": "raid1", 00:19:02.059 "superblock": true, 00:19:02.059 "num_base_bdevs": 2, 00:19:02.059 "num_base_bdevs_discovered": 2, 00:19:02.059 "num_base_bdevs_operational": 2, 00:19:02.059 "base_bdevs_list": [ 00:19:02.059 { 00:19:02.059 "name": "pt1", 00:19:02.059 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:02.059 "is_configured": true, 00:19:02.059 "data_offset": 256, 00:19:02.059 "data_size": 7936 00:19:02.059 }, 00:19:02.059 { 00:19:02.059 "name": "pt2", 00:19:02.059 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:02.059 "is_configured": true, 00:19:02.059 "data_offset": 256, 00:19:02.059 "data_size": 7936 00:19:02.059 } 00:19:02.059 ] 00:19:02.059 } 00:19:02.059 } 00:19:02.059 }' 00:19:02.059 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:02.059 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:02.059 pt2' 00:19:02.059 09:56:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:02.059 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:02.059 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:02.059 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:02.059 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:02.059 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.059 09:56:03 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.060 [2024-11-27 09:56:03.132099] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f44ee054-182f-4d8c-bfb2-055412f43c0d 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z f44ee054-182f-4d8c-bfb2-055412f43c0d ']' 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.060 [2024-11-27 09:56:03.175665] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:02.060 [2024-11-27 09:56:03.175797] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:02.060 [2024-11-27 09:56:03.175956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:02.060 [2024-11-27 09:56:03.176062] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:02.060 [2024-11-27 09:56:03.176134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.060 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.320 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.320 [2024-11-27 09:56:03.319461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:02.320 [2024-11-27 09:56:03.321840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:02.321 [2024-11-27 09:56:03.321943] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:02.321 [2024-11-27 09:56:03.322029] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:02.321 [2024-11-27 09:56:03.322063] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:02.321 [2024-11-27 09:56:03.322076] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:02.321 request: 00:19:02.321 { 00:19:02.321 "name": "raid_bdev1", 00:19:02.321 "raid_level": "raid1", 00:19:02.321 "base_bdevs": [ 00:19:02.321 "malloc1", 00:19:02.321 "malloc2" 00:19:02.321 ], 00:19:02.321 "superblock": false, 00:19:02.321 "method": "bdev_raid_create", 00:19:02.321 "req_id": 1 00:19:02.321 } 00:19:02.321 Got JSON-RPC error response 00:19:02.321 response: 00:19:02.321 { 00:19:02.321 "code": -17, 00:19:02.321 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:02.321 } 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.321 [2024-11-27 09:56:03.387320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:02.321 [2024-11-27 09:56:03.387532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.321 [2024-11-27 09:56:03.387576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:02.321 [2024-11-27 09:56:03.387620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.321 [2024-11-27 09:56:03.390360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.321 [2024-11-27 09:56:03.390462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:02.321 [2024-11-27 09:56:03.390619] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:02.321 [2024-11-27 09:56:03.390719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:02.321 pt1 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.321 "name": "raid_bdev1", 00:19:02.321 "uuid": "f44ee054-182f-4d8c-bfb2-055412f43c0d", 00:19:02.321 "strip_size_kb": 0, 00:19:02.321 "state": "configuring", 00:19:02.321 "raid_level": "raid1", 00:19:02.321 "superblock": true, 00:19:02.321 "num_base_bdevs": 2, 00:19:02.321 "num_base_bdevs_discovered": 1, 00:19:02.321 "num_base_bdevs_operational": 2, 00:19:02.321 "base_bdevs_list": [ 00:19:02.321 { 00:19:02.321 "name": "pt1", 00:19:02.321 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:02.321 "is_configured": true, 00:19:02.321 "data_offset": 256, 00:19:02.321 "data_size": 7936 00:19:02.321 }, 00:19:02.321 { 00:19:02.321 "name": null, 00:19:02.321 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:02.321 "is_configured": false, 00:19:02.321 "data_offset": 256, 00:19:02.321 "data_size": 7936 00:19:02.321 } 
00:19:02.321 ] 00:19:02.321 }' 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.321 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.891 [2024-11-27 09:56:03.778645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:02.891 [2024-11-27 09:56:03.778865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:02.891 [2024-11-27 09:56:03.778912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:02.891 [2024-11-27 09:56:03.778947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:02.891 [2024-11-27 09:56:03.779555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:02.891 [2024-11-27 09:56:03.779631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:02.891 [2024-11-27 09:56:03.779765] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:02.891 [2024-11-27 09:56:03.779828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:02.891 [2024-11-27 09:56:03.780022] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:19:02.891 [2024-11-27 09:56:03.780067] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:02.891 [2024-11-27 09:56:03.780368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:02.891 [2024-11-27 09:56:03.780588] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:02.891 [2024-11-27 09:56:03.780629] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:02.891 [2024-11-27 09:56:03.780817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.891 pt2 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.891 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.891 "name": "raid_bdev1", 00:19:02.891 "uuid": "f44ee054-182f-4d8c-bfb2-055412f43c0d", 00:19:02.891 "strip_size_kb": 0, 00:19:02.891 "state": "online", 00:19:02.891 "raid_level": "raid1", 00:19:02.891 "superblock": true, 00:19:02.891 "num_base_bdevs": 2, 00:19:02.892 "num_base_bdevs_discovered": 2, 00:19:02.892 "num_base_bdevs_operational": 2, 00:19:02.892 "base_bdevs_list": [ 00:19:02.892 { 00:19:02.892 "name": "pt1", 00:19:02.892 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:02.892 "is_configured": true, 00:19:02.892 "data_offset": 256, 00:19:02.892 "data_size": 7936 00:19:02.892 }, 00:19:02.892 { 00:19:02.892 "name": "pt2", 00:19:02.892 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:02.892 "is_configured": true, 00:19:02.892 "data_offset": 256, 00:19:02.892 "data_size": 7936 00:19:02.892 } 00:19:02.892 ] 00:19:02.892 }' 00:19:02.892 09:56:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.892 09:56:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.151 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:19:03.152 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:03.152 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:03.152 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:03.152 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:03.152 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:03.152 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:03.152 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.152 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.152 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:03.152 [2024-11-27 09:56:04.218159] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.152 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.152 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:03.152 "name": "raid_bdev1", 00:19:03.152 "aliases": [ 00:19:03.152 "f44ee054-182f-4d8c-bfb2-055412f43c0d" 00:19:03.152 ], 00:19:03.152 "product_name": "Raid Volume", 00:19:03.152 "block_size": 4096, 00:19:03.152 "num_blocks": 7936, 00:19:03.152 "uuid": "f44ee054-182f-4d8c-bfb2-055412f43c0d", 00:19:03.152 "assigned_rate_limits": { 00:19:03.152 "rw_ios_per_sec": 0, 00:19:03.152 "rw_mbytes_per_sec": 0, 00:19:03.152 "r_mbytes_per_sec": 0, 00:19:03.152 "w_mbytes_per_sec": 0 00:19:03.152 }, 00:19:03.152 "claimed": false, 00:19:03.152 "zoned": false, 00:19:03.152 "supported_io_types": { 00:19:03.152 "read": true, 00:19:03.152 "write": true, 00:19:03.152 "unmap": false, 
00:19:03.152 "flush": false, 00:19:03.152 "reset": true, 00:19:03.152 "nvme_admin": false, 00:19:03.152 "nvme_io": false, 00:19:03.152 "nvme_io_md": false, 00:19:03.152 "write_zeroes": true, 00:19:03.152 "zcopy": false, 00:19:03.152 "get_zone_info": false, 00:19:03.152 "zone_management": false, 00:19:03.152 "zone_append": false, 00:19:03.152 "compare": false, 00:19:03.152 "compare_and_write": false, 00:19:03.152 "abort": false, 00:19:03.152 "seek_hole": false, 00:19:03.152 "seek_data": false, 00:19:03.152 "copy": false, 00:19:03.152 "nvme_iov_md": false 00:19:03.152 }, 00:19:03.152 "memory_domains": [ 00:19:03.152 { 00:19:03.152 "dma_device_id": "system", 00:19:03.152 "dma_device_type": 1 00:19:03.152 }, 00:19:03.152 { 00:19:03.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.152 "dma_device_type": 2 00:19:03.152 }, 00:19:03.152 { 00:19:03.152 "dma_device_id": "system", 00:19:03.152 "dma_device_type": 1 00:19:03.152 }, 00:19:03.152 { 00:19:03.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.152 "dma_device_type": 2 00:19:03.152 } 00:19:03.152 ], 00:19:03.152 "driver_specific": { 00:19:03.152 "raid": { 00:19:03.152 "uuid": "f44ee054-182f-4d8c-bfb2-055412f43c0d", 00:19:03.152 "strip_size_kb": 0, 00:19:03.152 "state": "online", 00:19:03.152 "raid_level": "raid1", 00:19:03.152 "superblock": true, 00:19:03.152 "num_base_bdevs": 2, 00:19:03.152 "num_base_bdevs_discovered": 2, 00:19:03.152 "num_base_bdevs_operational": 2, 00:19:03.152 "base_bdevs_list": [ 00:19:03.152 { 00:19:03.152 "name": "pt1", 00:19:03.152 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:03.152 "is_configured": true, 00:19:03.152 "data_offset": 256, 00:19:03.152 "data_size": 7936 00:19:03.152 }, 00:19:03.152 { 00:19:03.152 "name": "pt2", 00:19:03.152 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.152 "is_configured": true, 00:19:03.152 "data_offset": 256, 00:19:03.152 "data_size": 7936 00:19:03.152 } 00:19:03.152 ] 00:19:03.152 } 00:19:03.152 } 00:19:03.152 }' 00:19:03.152 
09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:03.412 pt2' 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.412 [2024-11-27 09:56:04.461714] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' f44ee054-182f-4d8c-bfb2-055412f43c0d '!=' f44ee054-182f-4d8c-bfb2-055412f43c0d ']' 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.412 [2024-11-27 09:56:04.509421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.412 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.671 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.672 "name": "raid_bdev1", 00:19:03.672 "uuid": 
"f44ee054-182f-4d8c-bfb2-055412f43c0d", 00:19:03.672 "strip_size_kb": 0, 00:19:03.672 "state": "online", 00:19:03.672 "raid_level": "raid1", 00:19:03.672 "superblock": true, 00:19:03.672 "num_base_bdevs": 2, 00:19:03.672 "num_base_bdevs_discovered": 1, 00:19:03.672 "num_base_bdevs_operational": 1, 00:19:03.672 "base_bdevs_list": [ 00:19:03.672 { 00:19:03.672 "name": null, 00:19:03.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.672 "is_configured": false, 00:19:03.672 "data_offset": 0, 00:19:03.672 "data_size": 7936 00:19:03.672 }, 00:19:03.672 { 00:19:03.672 "name": "pt2", 00:19:03.672 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.672 "is_configured": true, 00:19:03.672 "data_offset": 256, 00:19:03.672 "data_size": 7936 00:19:03.672 } 00:19:03.672 ] 00:19:03.672 }' 00:19:03.672 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.672 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.932 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:03.932 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.932 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.932 [2024-11-27 09:56:04.920801] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:03.932 [2024-11-27 09:56:04.920944] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:03.932 [2024-11-27 09:56:04.921094] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.932 [2024-11-27 09:56:04.921193] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.932 [2024-11-27 09:56:04.921264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:19:03.932 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.932 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.932 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.933 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.933 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:03.933 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.933 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:03.933 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:03.933 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:03.933 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:03.933 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:03.933 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.933 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.933 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.933 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:03.933 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:03.933 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:03.933 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:03.933 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:19:03.933 09:56:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:03.933 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.933 09:56:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.933 [2024-11-27 09:56:04.996719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:03.933 [2024-11-27 09:56:04.996819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.933 [2024-11-27 09:56:04.996840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:03.933 [2024-11-27 09:56:04.996852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.933 [2024-11-27 09:56:04.999559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.933 [2024-11-27 09:56:04.999670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:03.933 [2024-11-27 09:56:04.999791] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:03.933 [2024-11-27 09:56:04.999851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:03.933 [2024-11-27 09:56:04.999973] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:03.933 [2024-11-27 09:56:04.999987] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:03.933 [2024-11-27 09:56:05.000278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:03.933 [2024-11-27 09:56:05.000449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:03.933 [2024-11-27 09:56:05.000460] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:19:03.933 [2024-11-27 09:56:05.000681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.933 pt2 00:19:03.933 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.933 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:03.933 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.933 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.933 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.933 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.933 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:03.933 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.933 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.933 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.933 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.933 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.933 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.933 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.933 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:03.933 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.933 09:56:05 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.933 "name": "raid_bdev1", 00:19:03.933 "uuid": "f44ee054-182f-4d8c-bfb2-055412f43c0d", 00:19:03.933 "strip_size_kb": 0, 00:19:03.933 "state": "online", 00:19:03.933 "raid_level": "raid1", 00:19:03.933 "superblock": true, 00:19:03.933 "num_base_bdevs": 2, 00:19:03.933 "num_base_bdevs_discovered": 1, 00:19:03.933 "num_base_bdevs_operational": 1, 00:19:03.933 "base_bdevs_list": [ 00:19:03.933 { 00:19:03.933 "name": null, 00:19:03.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.933 "is_configured": false, 00:19:03.933 "data_offset": 256, 00:19:03.933 "data_size": 7936 00:19:03.933 }, 00:19:03.933 { 00:19:03.933 "name": "pt2", 00:19:03.933 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:03.933 "is_configured": true, 00:19:03.933 "data_offset": 256, 00:19:03.933 "data_size": 7936 00:19:03.933 } 00:19:03.933 ] 00:19:03.933 }' 00:19:03.933 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.933 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.504 [2024-11-27 09:56:05.463994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:04.504 [2024-11-27 09:56:05.464147] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:04.504 [2024-11-27 09:56:05.464305] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:04.504 [2024-11-27 09:56:05.464403] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:19:04.504 [2024-11-27 09:56:05.464484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.504 [2024-11-27 09:56:05.527936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:04.504 [2024-11-27 09:56:05.528158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:04.504 [2024-11-27 09:56:05.528206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:04.504 [2024-11-27 09:56:05.528242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:04.504 [2024-11-27 09:56:05.531032] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:04.504 [2024-11-27 09:56:05.531154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:04.504 [2024-11-27 09:56:05.531307] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:04.504 [2024-11-27 09:56:05.531398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:04.504 [2024-11-27 09:56:05.531619] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:04.504 [2024-11-27 09:56:05.531678] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:04.504 [2024-11-27 09:56:05.531722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:04.504 [2024-11-27 09:56:05.531837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:04.504 [2024-11-27 09:56:05.531959] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:04.504 [2024-11-27 09:56:05.532006] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:04.504 [2024-11-27 09:56:05.532336] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:04.504 [2024-11-27 09:56:05.532545] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:04.504 [2024-11-27 09:56:05.532604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:04.504 [2024-11-27 09:56:05.532880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.504 pt1 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.504 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:04.504 "name": "raid_bdev1", 00:19:04.504 "uuid": "f44ee054-182f-4d8c-bfb2-055412f43c0d", 00:19:04.504 "strip_size_kb": 0, 00:19:04.504 "state": "online", 00:19:04.504 
"raid_level": "raid1", 00:19:04.504 "superblock": true, 00:19:04.504 "num_base_bdevs": 2, 00:19:04.504 "num_base_bdevs_discovered": 1, 00:19:04.504 "num_base_bdevs_operational": 1, 00:19:04.504 "base_bdevs_list": [ 00:19:04.504 { 00:19:04.504 "name": null, 00:19:04.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:04.505 "is_configured": false, 00:19:04.505 "data_offset": 256, 00:19:04.505 "data_size": 7936 00:19:04.505 }, 00:19:04.505 { 00:19:04.505 "name": "pt2", 00:19:04.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:04.505 "is_configured": true, 00:19:04.505 "data_offset": 256, 00:19:04.505 "data_size": 7936 00:19:04.505 } 00:19:04.505 ] 00:19:04.505 }' 00:19:04.505 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:04.505 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.074 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:05.074 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.074 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:05.074 09:56:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:05.074 09:56:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.074 09:56:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:05.074 09:56:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:05.074 09:56:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:05.074 09:56:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.074 09:56:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:19:05.074 [2024-11-27 09:56:06.027491] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:05.074 09:56:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.074 09:56:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' f44ee054-182f-4d8c-bfb2-055412f43c0d '!=' f44ee054-182f-4d8c-bfb2-055412f43c0d ']' 00:19:05.074 09:56:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86508 00:19:05.074 09:56:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86508 ']' 00:19:05.074 09:56:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86508 00:19:05.074 09:56:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:19:05.074 09:56:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.074 09:56:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86508 00:19:05.074 killing process with pid 86508 00:19:05.074 09:56:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:05.074 09:56:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:05.074 09:56:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86508' 00:19:05.074 09:56:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86508 00:19:05.074 [2024-11-27 09:56:06.106583] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:05.074 [2024-11-27 09:56:06.106725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:05.074 09:56:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86508 00:19:05.074 [2024-11-27 09:56:06.106787] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:05.074 [2024-11-27 09:56:06.106806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:05.333 [2024-11-27 09:56:06.332456] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:06.711 09:56:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:19:06.711 00:19:06.711 real 0m6.213s 00:19:06.711 user 0m9.113s 00:19:06.711 sys 0m1.308s 00:19:06.711 ************************************ 00:19:06.711 END TEST raid_superblock_test_4k 00:19:06.711 ************************************ 00:19:06.711 09:56:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:06.711 09:56:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.711 09:56:07 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:19:06.711 09:56:07 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:19:06.711 09:56:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:06.711 09:56:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.711 09:56:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:06.711 ************************************ 00:19:06.711 START TEST raid_rebuild_test_sb_4k 00:19:06.711 ************************************ 00:19:06.711 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:06.711 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:06.711 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:06.711 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:06.711 
09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:06.711 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:06.711 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:06.711 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:06.711 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # 
strip_size=0 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86831 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86831 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86831 ']' 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.712 09:56:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:06.712 [2024-11-27 09:56:07.748577] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:19:06.712 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:06.712 Zero copy mechanism will not be used. 
00:19:06.712 [2024-11-27 09:56:07.748854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86831 ] 00:19:06.970 [2024-11-27 09:56:07.929233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.970 [2024-11-27 09:56:08.073233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.229 [2024-11-27 09:56:08.316371] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:07.229 [2024-11-27 09:56:08.316441] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:07.489 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.489 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:19:07.489 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:07.489 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:19:07.489 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.489 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.751 BaseBdev1_malloc 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.751 [2024-11-27 09:56:08.657945] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:07.751 [2024-11-27 09:56:08.658054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.751 [2024-11-27 09:56:08.658080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:07.751 [2024-11-27 09:56:08.658093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.751 [2024-11-27 09:56:08.660732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.751 [2024-11-27 09:56:08.660843] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:07.751 BaseBdev1 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.751 BaseBdev2_malloc 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.751 [2024-11-27 09:56:08.717639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:07.751 [2024-11-27 09:56:08.717742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:19:07.751 [2024-11-27 09:56:08.717769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:07.751 [2024-11-27 09:56:08.717782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.751 [2024-11-27 09:56:08.720431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.751 [2024-11-27 09:56:08.720475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:07.751 BaseBdev2 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.751 spare_malloc 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.751 spare_delay 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.751 
[2024-11-27 09:56:08.806942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:07.751 [2024-11-27 09:56:08.807037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:07.751 [2024-11-27 09:56:08.807068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:07.751 [2024-11-27 09:56:08.807080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:07.751 [2024-11-27 09:56:08.809882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:07.751 [2024-11-27 09:56:08.809931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:07.751 spare 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.751 [2024-11-27 09:56:08.819026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:07.751 [2024-11-27 09:56:08.821454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:07.751 [2024-11-27 09:56:08.821681] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:07.751 [2024-11-27 09:56:08.821699] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:07.751 [2024-11-27 09:56:08.822028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:07.751 [2024-11-27 09:56:08.822243] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:07.751 [2024-11-27 
09:56:08.822260] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:07.751 [2024-11-27 09:56:08.822494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.751 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.752 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.752 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:07.752 09:56:08 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.752 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.752 "name": "raid_bdev1", 00:19:07.752 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:07.752 "strip_size_kb": 0, 00:19:07.752 "state": "online", 00:19:07.752 "raid_level": "raid1", 00:19:07.752 "superblock": true, 00:19:07.752 "num_base_bdevs": 2, 00:19:07.752 "num_base_bdevs_discovered": 2, 00:19:07.752 "num_base_bdevs_operational": 2, 00:19:07.752 "base_bdevs_list": [ 00:19:07.752 { 00:19:07.752 "name": "BaseBdev1", 00:19:07.752 "uuid": "0f8f45ba-b860-5d54-bda1-38bff14d809e", 00:19:07.752 "is_configured": true, 00:19:07.752 "data_offset": 256, 00:19:07.752 "data_size": 7936 00:19:07.752 }, 00:19:07.752 { 00:19:07.752 "name": "BaseBdev2", 00:19:07.752 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:07.752 "is_configured": true, 00:19:07.752 "data_offset": 256, 00:19:07.752 "data_size": 7936 00:19:07.752 } 00:19:07.752 ] 00:19:07.752 }' 00:19:07.752 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.752 09:56:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.362 [2024-11-27 09:56:09.270527] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:08.362 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:08.362 
09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:08.620 [2024-11-27 09:56:09.561829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:08.620 /dev/nbd0 00:19:08.620 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:08.620 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:08.620 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:08.620 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:08.620 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:08.620 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:08.620 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:08.620 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:08.620 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:08.620 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:08.620 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:08.620 1+0 records in 00:19:08.620 1+0 records out 00:19:08.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471496 s, 8.7 MB/s 00:19:08.620 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.620 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:08.620 09:56:09 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.620 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:08.620 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:08.620 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:08.620 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:08.620 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:08.620 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:08.620 09:56:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:09.189 7936+0 records in 00:19:09.189 7936+0 records out 00:19:09.189 32505856 bytes (33 MB, 31 MiB) copied, 0.634856 s, 51.2 MB/s 00:19:09.189 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:09.189 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:09.189 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:09.189 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:09.189 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:09.189 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.190 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:09.449 [2024-11-27 09:56:10.497464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.449 [2024-11-27 09:56:10.517639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:09.449 09:56:10 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.449 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.450 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.450 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.450 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:09.450 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.450 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.450 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.450 "name": "raid_bdev1", 00:19:09.450 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:09.450 "strip_size_kb": 0, 00:19:09.450 "state": "online", 00:19:09.450 "raid_level": "raid1", 00:19:09.450 "superblock": true, 00:19:09.450 "num_base_bdevs": 2, 00:19:09.450 "num_base_bdevs_discovered": 1, 00:19:09.450 "num_base_bdevs_operational": 1, 00:19:09.450 "base_bdevs_list": [ 00:19:09.450 { 00:19:09.450 "name": null, 00:19:09.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.450 "is_configured": false, 00:19:09.450 "data_offset": 0, 00:19:09.450 "data_size": 7936 00:19:09.450 }, 00:19:09.450 { 00:19:09.450 "name": "BaseBdev2", 00:19:09.450 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:09.450 "is_configured": true, 00:19:09.450 "data_offset": 256, 00:19:09.450 
"data_size": 7936 00:19:09.450 } 00:19:09.450 ] 00:19:09.450 }' 00:19:09.450 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.450 09:56:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.018 09:56:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:10.018 09:56:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.018 09:56:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.018 [2024-11-27 09:56:11.008831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:10.018 [2024-11-27 09:56:11.026788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:10.018 09:56:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.018 09:56:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:10.018 [2024-11-27 09:56:11.029129] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:10.957 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:10.957 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.957 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:10.957 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:10.957 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.957 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.957 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:19:10.957 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.957 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:10.957 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.957 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.957 "name": "raid_bdev1", 00:19:10.957 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:10.957 "strip_size_kb": 0, 00:19:10.957 "state": "online", 00:19:10.957 "raid_level": "raid1", 00:19:10.957 "superblock": true, 00:19:10.957 "num_base_bdevs": 2, 00:19:10.957 "num_base_bdevs_discovered": 2, 00:19:10.957 "num_base_bdevs_operational": 2, 00:19:10.957 "process": { 00:19:10.957 "type": "rebuild", 00:19:10.957 "target": "spare", 00:19:10.957 "progress": { 00:19:10.957 "blocks": 2560, 00:19:10.957 "percent": 32 00:19:10.957 } 00:19:10.957 }, 00:19:10.957 "base_bdevs_list": [ 00:19:10.957 { 00:19:10.957 "name": "spare", 00:19:10.957 "uuid": "8b06e187-1e92-55d4-bc0f-529329be0034", 00:19:10.957 "is_configured": true, 00:19:10.957 "data_offset": 256, 00:19:10.957 "data_size": 7936 00:19:10.957 }, 00:19:10.957 { 00:19:10.957 "name": "BaseBdev2", 00:19:10.957 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:10.957 "is_configured": true, 00:19:10.957 "data_offset": 256, 00:19:10.957 "data_size": 7936 00:19:10.957 } 00:19:10.957 ] 00:19:10.957 }' 00:19:11.215 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.215 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.216 [2024-11-27 09:56:12.184965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:11.216 [2024-11-27 09:56:12.239871] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:11.216 [2024-11-27 09:56:12.240014] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.216 [2024-11-27 09:56:12.240033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:11.216 [2024-11-27 09:56:12.240045] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.216 "name": "raid_bdev1", 00:19:11.216 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:11.216 "strip_size_kb": 0, 00:19:11.216 "state": "online", 00:19:11.216 "raid_level": "raid1", 00:19:11.216 "superblock": true, 00:19:11.216 "num_base_bdevs": 2, 00:19:11.216 "num_base_bdevs_discovered": 1, 00:19:11.216 "num_base_bdevs_operational": 1, 00:19:11.216 "base_bdevs_list": [ 00:19:11.216 { 00:19:11.216 "name": null, 00:19:11.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.216 "is_configured": false, 00:19:11.216 "data_offset": 0, 00:19:11.216 "data_size": 7936 00:19:11.216 }, 00:19:11.216 { 00:19:11.216 "name": "BaseBdev2", 00:19:11.216 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:11.216 "is_configured": true, 00:19:11.216 "data_offset": 256, 00:19:11.216 "data_size": 7936 00:19:11.216 } 00:19:11.216 ] 00:19:11.216 }' 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.216 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.787 09:56:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:11.787 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.787 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:11.787 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:11.787 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.787 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.787 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.787 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.787 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.787 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.787 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.787 "name": "raid_bdev1", 00:19:11.787 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:11.787 "strip_size_kb": 0, 00:19:11.787 "state": "online", 00:19:11.787 "raid_level": "raid1", 00:19:11.787 "superblock": true, 00:19:11.787 "num_base_bdevs": 2, 00:19:11.787 "num_base_bdevs_discovered": 1, 00:19:11.787 "num_base_bdevs_operational": 1, 00:19:11.787 "base_bdevs_list": [ 00:19:11.787 { 00:19:11.787 "name": null, 00:19:11.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.787 "is_configured": false, 00:19:11.787 "data_offset": 0, 00:19:11.787 "data_size": 7936 00:19:11.787 }, 00:19:11.787 { 00:19:11.787 "name": "BaseBdev2", 00:19:11.787 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:11.787 "is_configured": true, 00:19:11.787 "data_offset": 
256, 00:19:11.787 "data_size": 7936 00:19:11.787 } 00:19:11.787 ] 00:19:11.787 }' 00:19:11.787 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.787 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:11.787 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.787 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:11.787 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:11.787 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.787 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:11.787 [2024-11-27 09:56:12.846708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:11.787 [2024-11-27 09:56:12.865337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:11.787 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.787 09:56:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:11.787 [2024-11-27 09:56:12.867765] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:13.168 09:56:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.168 09:56:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.168 09:56:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.168 09:56:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.168 09:56:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.168 09:56:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.168 09:56:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.168 09:56:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.168 09:56:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:13.168 09:56:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.168 09:56:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.168 "name": "raid_bdev1", 00:19:13.168 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:13.168 "strip_size_kb": 0, 00:19:13.168 "state": "online", 00:19:13.168 "raid_level": "raid1", 00:19:13.168 "superblock": true, 00:19:13.168 "num_base_bdevs": 2, 00:19:13.168 "num_base_bdevs_discovered": 2, 00:19:13.168 "num_base_bdevs_operational": 2, 00:19:13.168 "process": { 00:19:13.168 "type": "rebuild", 00:19:13.168 "target": "spare", 00:19:13.168 "progress": { 00:19:13.168 "blocks": 2560, 00:19:13.168 "percent": 32 00:19:13.168 } 00:19:13.168 }, 00:19:13.168 "base_bdevs_list": [ 00:19:13.168 { 00:19:13.168 "name": "spare", 00:19:13.168 "uuid": "8b06e187-1e92-55d4-bc0f-529329be0034", 00:19:13.168 "is_configured": true, 00:19:13.168 "data_offset": 256, 00:19:13.168 "data_size": 7936 00:19:13.168 }, 00:19:13.168 { 00:19:13.168 "name": "BaseBdev2", 00:19:13.168 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:13.168 "is_configured": true, 00:19:13.168 "data_offset": 256, 00:19:13.168 "data_size": 7936 00:19:13.168 } 00:19:13.168 ] 00:19:13.168 }' 00:19:13.168 09:56:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.168 09:56:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:19:13.168 09:56:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:13.168 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=689 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.168 "name": "raid_bdev1", 00:19:13.168 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:13.168 "strip_size_kb": 0, 00:19:13.168 "state": "online", 00:19:13.168 "raid_level": "raid1", 00:19:13.168 "superblock": true, 00:19:13.168 "num_base_bdevs": 2, 00:19:13.168 "num_base_bdevs_discovered": 2, 00:19:13.168 "num_base_bdevs_operational": 2, 00:19:13.168 "process": { 00:19:13.168 "type": "rebuild", 00:19:13.168 "target": "spare", 00:19:13.168 "progress": { 00:19:13.168 "blocks": 2816, 00:19:13.168 "percent": 35 00:19:13.168 } 00:19:13.168 }, 00:19:13.168 "base_bdevs_list": [ 00:19:13.168 { 00:19:13.168 "name": "spare", 00:19:13.168 "uuid": "8b06e187-1e92-55d4-bc0f-529329be0034", 00:19:13.168 "is_configured": true, 00:19:13.168 "data_offset": 256, 00:19:13.168 "data_size": 7936 00:19:13.168 }, 00:19:13.168 { 00:19:13.168 "name": "BaseBdev2", 00:19:13.168 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:13.168 "is_configured": true, 00:19:13.168 "data_offset": 256, 00:19:13.168 "data_size": 7936 00:19:13.168 } 00:19:13.168 ] 00:19:13.168 }' 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.168 09:56:14 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:19:14.106 09:56:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:14.106 09:56:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:14.106 09:56:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:14.106 09:56:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:14.106 09:56:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:14.106 09:56:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:14.106 09:56:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.106 09:56:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.106 09:56:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.106 09:56:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:14.106 09:56:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.106 09:56:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.106 "name": "raid_bdev1", 00:19:14.106 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:14.106 "strip_size_kb": 0, 00:19:14.106 "state": "online", 00:19:14.106 "raid_level": "raid1", 00:19:14.106 "superblock": true, 00:19:14.106 "num_base_bdevs": 2, 00:19:14.106 "num_base_bdevs_discovered": 2, 00:19:14.106 "num_base_bdevs_operational": 2, 00:19:14.106 "process": { 00:19:14.106 "type": "rebuild", 00:19:14.106 "target": "spare", 00:19:14.106 "progress": { 00:19:14.106 "blocks": 5632, 00:19:14.106 "percent": 70 00:19:14.106 } 00:19:14.106 }, 00:19:14.106 "base_bdevs_list": [ 00:19:14.106 { 
00:19:14.106 "name": "spare", 00:19:14.106 "uuid": "8b06e187-1e92-55d4-bc0f-529329be0034", 00:19:14.106 "is_configured": true, 00:19:14.106 "data_offset": 256, 00:19:14.106 "data_size": 7936 00:19:14.106 }, 00:19:14.106 { 00:19:14.106 "name": "BaseBdev2", 00:19:14.106 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:14.106 "is_configured": true, 00:19:14.106 "data_offset": 256, 00:19:14.106 "data_size": 7936 00:19:14.106 } 00:19:14.106 ] 00:19:14.106 }' 00:19:14.106 09:56:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.365 09:56:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:14.365 09:56:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.365 09:56:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:14.365 09:56:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:14.933 [2024-11-27 09:56:15.995363] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:14.933 [2024-11-27 09:56:15.995596] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:14.933 [2024-11-27 09:56:15.995805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:15.192 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:15.192 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.192 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.192 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.192 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:19:15.192 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.452 "name": "raid_bdev1", 00:19:15.452 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:15.452 "strip_size_kb": 0, 00:19:15.452 "state": "online", 00:19:15.452 "raid_level": "raid1", 00:19:15.452 "superblock": true, 00:19:15.452 "num_base_bdevs": 2, 00:19:15.452 "num_base_bdevs_discovered": 2, 00:19:15.452 "num_base_bdevs_operational": 2, 00:19:15.452 "base_bdevs_list": [ 00:19:15.452 { 00:19:15.452 "name": "spare", 00:19:15.452 "uuid": "8b06e187-1e92-55d4-bc0f-529329be0034", 00:19:15.452 "is_configured": true, 00:19:15.452 "data_offset": 256, 00:19:15.452 "data_size": 7936 00:19:15.452 }, 00:19:15.452 { 00:19:15.452 "name": "BaseBdev2", 00:19:15.452 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:15.452 "is_configured": true, 00:19:15.452 "data_offset": 256, 00:19:15.452 "data_size": 7936 00:19:15.452 } 00:19:15.452 ] 00:19:15.452 }' 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.452 "name": "raid_bdev1", 00:19:15.452 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:15.452 "strip_size_kb": 0, 00:19:15.452 "state": "online", 00:19:15.452 "raid_level": "raid1", 00:19:15.452 "superblock": true, 00:19:15.452 "num_base_bdevs": 2, 00:19:15.452 "num_base_bdevs_discovered": 2, 00:19:15.452 "num_base_bdevs_operational": 2, 00:19:15.452 "base_bdevs_list": [ 00:19:15.452 { 00:19:15.452 "name": "spare", 00:19:15.452 "uuid": "8b06e187-1e92-55d4-bc0f-529329be0034", 00:19:15.452 "is_configured": true, 00:19:15.452 
"data_offset": 256, 00:19:15.452 "data_size": 7936 00:19:15.452 }, 00:19:15.452 { 00:19:15.452 "name": "BaseBdev2", 00:19:15.452 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:15.452 "is_configured": true, 00:19:15.452 "data_offset": 256, 00:19:15.452 "data_size": 7936 00:19:15.452 } 00:19:15.452 ] 00:19:15.452 }' 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:15.452 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.712 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:15.712 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:15.712 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:15.712 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:15.712 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:15.712 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:15.712 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:15.712 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:15.712 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:15.712 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:15.712 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:15.712 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:19:15.712 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.712 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.712 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.712 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.712 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:15.712 "name": "raid_bdev1", 00:19:15.712 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:15.712 "strip_size_kb": 0, 00:19:15.712 "state": "online", 00:19:15.712 "raid_level": "raid1", 00:19:15.712 "superblock": true, 00:19:15.712 "num_base_bdevs": 2, 00:19:15.712 "num_base_bdevs_discovered": 2, 00:19:15.712 "num_base_bdevs_operational": 2, 00:19:15.712 "base_bdevs_list": [ 00:19:15.712 { 00:19:15.712 "name": "spare", 00:19:15.712 "uuid": "8b06e187-1e92-55d4-bc0f-529329be0034", 00:19:15.712 "is_configured": true, 00:19:15.712 "data_offset": 256, 00:19:15.712 "data_size": 7936 00:19:15.712 }, 00:19:15.712 { 00:19:15.712 "name": "BaseBdev2", 00:19:15.712 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:15.712 "is_configured": true, 00:19:15.712 "data_offset": 256, 00:19:15.712 "data_size": 7936 00:19:15.712 } 00:19:15.712 ] 00:19:15.712 }' 00:19:15.712 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:15.712 09:56:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.971 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:15.971 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.971 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.971 
[2024-11-27 09:56:17.071630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:15.971 [2024-11-27 09:56:17.071675] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:15.971 [2024-11-27 09:56:17.071805] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:15.971 [2024-11-27 09:56:17.071892] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:15.971 [2024-11-27 09:56:17.071907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:15.971 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.971 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.971 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:19:15.971 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.971 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:15.971 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.231 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:16.231 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:16.231 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:16.231 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:16.231 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:16.231 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:19:16.231 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:16.231 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:16.231 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:16.231 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:16.231 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:16.231 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:16.231 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:16.231 /dev/nbd0 00:19:16.231 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:16.490 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:16.490 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:16.490 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:16.490 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:16.490 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:16.490 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:16.490 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:16.490 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:16.490 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:16.490 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:16.490 1+0 records in 00:19:16.490 1+0 records out 00:19:16.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446806 s, 9.2 MB/s 00:19:16.490 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.490 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:16.490 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.490 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:16.490 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:16.490 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:16.490 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:16.490 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:16.490 /dev/nbd1 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:16.748 1+0 records in 00:19:16.748 1+0 records out 00:19:16.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036534 s, 11.2 MB/s 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:16.748 09:56:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:17.007 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:17.007 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:17.007 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:17.007 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:17.007 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:17.007 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:17.007 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:17.007 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:17.007 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:17.007 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:17.266 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:17.267 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:17.267 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:17.267 09:56:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:17.267 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:17.267 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:17.267 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:17.267 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:17.267 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:17.267 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:17.267 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.267 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.267 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.267 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:17.267 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.267 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.267 [2024-11-27 09:56:18.360830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:17.267 [2024-11-27 09:56:18.361023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.267 [2024-11-27 09:56:18.361063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:17.267 [2024-11-27 09:56:18.361073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.267 [2024-11-27 09:56:18.363780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.267 
[2024-11-27 09:56:18.363823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:17.267 [2024-11-27 09:56:18.363951] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:17.267 [2024-11-27 09:56:18.364092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:17.267 [2024-11-27 09:56:18.364282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:17.267 spare 00:19:17.267 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.267 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:17.267 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.267 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.526 [2024-11-27 09:56:18.464262] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:17.526 [2024-11-27 09:56:18.464352] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:17.526 [2024-11-27 09:56:18.464800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:17.526 [2024-11-27 09:56:18.465102] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:17.526 [2024-11-27 09:56:18.465123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:17.526 [2024-11-27 09:56:18.465375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.526 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.526 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:17.526 09:56:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.526 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.526 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.526 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.526 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:17.526 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.526 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.526 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.526 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.526 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.526 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.526 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.526 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:17.526 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.526 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.526 "name": "raid_bdev1", 00:19:17.526 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:17.526 "strip_size_kb": 0, 00:19:17.526 "state": "online", 00:19:17.526 "raid_level": "raid1", 00:19:17.526 "superblock": true, 00:19:17.526 "num_base_bdevs": 2, 00:19:17.526 "num_base_bdevs_discovered": 2, 00:19:17.526 "num_base_bdevs_operational": 2, 
00:19:17.526 "base_bdevs_list": [ 00:19:17.526 { 00:19:17.526 "name": "spare", 00:19:17.526 "uuid": "8b06e187-1e92-55d4-bc0f-529329be0034", 00:19:17.526 "is_configured": true, 00:19:17.526 "data_offset": 256, 00:19:17.526 "data_size": 7936 00:19:17.526 }, 00:19:17.526 { 00:19:17.526 "name": "BaseBdev2", 00:19:17.526 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:17.526 "is_configured": true, 00:19:17.526 "data_offset": 256, 00:19:17.526 "data_size": 7936 00:19:17.526 } 00:19:17.526 ] 00:19:17.526 }' 00:19:17.526 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.526 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.093 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:18.093 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.094 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:18.094 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:18.094 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.094 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.094 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.094 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.094 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.094 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.094 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.094 "name": "raid_bdev1", 00:19:18.094 
"uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:18.094 "strip_size_kb": 0, 00:19:18.094 "state": "online", 00:19:18.094 "raid_level": "raid1", 00:19:18.094 "superblock": true, 00:19:18.094 "num_base_bdevs": 2, 00:19:18.094 "num_base_bdevs_discovered": 2, 00:19:18.094 "num_base_bdevs_operational": 2, 00:19:18.094 "base_bdevs_list": [ 00:19:18.094 { 00:19:18.094 "name": "spare", 00:19:18.094 "uuid": "8b06e187-1e92-55d4-bc0f-529329be0034", 00:19:18.094 "is_configured": true, 00:19:18.094 "data_offset": 256, 00:19:18.094 "data_size": 7936 00:19:18.094 }, 00:19:18.094 { 00:19:18.094 "name": "BaseBdev2", 00:19:18.094 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:18.094 "is_configured": true, 00:19:18.094 "data_offset": 256, 00:19:18.094 "data_size": 7936 00:19:18.094 } 00:19:18.094 ] 00:19:18.094 }' 00:19:18.094 09:56:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.094 [2024-11-27 09:56:19.144235] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.094 
09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.094 "name": "raid_bdev1", 00:19:18.094 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:18.094 "strip_size_kb": 0, 00:19:18.094 "state": "online", 00:19:18.094 "raid_level": "raid1", 00:19:18.094 "superblock": true, 00:19:18.094 "num_base_bdevs": 2, 00:19:18.094 "num_base_bdevs_discovered": 1, 00:19:18.094 "num_base_bdevs_operational": 1, 00:19:18.094 "base_bdevs_list": [ 00:19:18.094 { 00:19:18.094 "name": null, 00:19:18.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.094 "is_configured": false, 00:19:18.094 "data_offset": 0, 00:19:18.094 "data_size": 7936 00:19:18.094 }, 00:19:18.094 { 00:19:18.094 "name": "BaseBdev2", 00:19:18.094 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:18.094 "is_configured": true, 00:19:18.094 "data_offset": 256, 00:19:18.094 "data_size": 7936 00:19:18.094 } 00:19:18.094 ] 00:19:18.094 }' 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.094 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.659 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:18.659 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.659 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:18.659 [2024-11-27 09:56:19.567556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.659 [2024-11-27 09:56:19.567969] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:19:18.659 [2024-11-27 09:56:19.568065] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:18.659 [2024-11-27 09:56:19.568136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:18.659 [2024-11-27 09:56:19.586348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:18.659 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.659 09:56:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:18.659 [2024-11-27 09:56:19.588864] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:19.592 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:19.592 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.592 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:19.592 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:19.592 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.592 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.592 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.592 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.592 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.592 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.592 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.592 
"name": "raid_bdev1", 00:19:19.592 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:19.592 "strip_size_kb": 0, 00:19:19.592 "state": "online", 00:19:19.592 "raid_level": "raid1", 00:19:19.592 "superblock": true, 00:19:19.592 "num_base_bdevs": 2, 00:19:19.592 "num_base_bdevs_discovered": 2, 00:19:19.592 "num_base_bdevs_operational": 2, 00:19:19.592 "process": { 00:19:19.592 "type": "rebuild", 00:19:19.592 "target": "spare", 00:19:19.592 "progress": { 00:19:19.592 "blocks": 2560, 00:19:19.593 "percent": 32 00:19:19.593 } 00:19:19.593 }, 00:19:19.593 "base_bdevs_list": [ 00:19:19.593 { 00:19:19.593 "name": "spare", 00:19:19.593 "uuid": "8b06e187-1e92-55d4-bc0f-529329be0034", 00:19:19.593 "is_configured": true, 00:19:19.593 "data_offset": 256, 00:19:19.593 "data_size": 7936 00:19:19.593 }, 00:19:19.593 { 00:19:19.593 "name": "BaseBdev2", 00:19:19.593 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:19.593 "is_configured": true, 00:19:19.593 "data_offset": 256, 00:19:19.593 "data_size": 7936 00:19:19.593 } 00:19:19.593 ] 00:19:19.593 }' 00:19:19.593 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.593 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:19.593 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.852 [2024-11-27 09:56:20.751732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:19.852 [2024-11-27 
09:56:20.799564] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:19.852 [2024-11-27 09:56:20.799825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.852 [2024-11-27 09:56:20.799847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:19.852 [2024-11-27 09:56:20.799860] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.852 09:56:20 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.852 "name": "raid_bdev1", 00:19:19.852 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:19.852 "strip_size_kb": 0, 00:19:19.852 "state": "online", 00:19:19.852 "raid_level": "raid1", 00:19:19.852 "superblock": true, 00:19:19.852 "num_base_bdevs": 2, 00:19:19.852 "num_base_bdevs_discovered": 1, 00:19:19.852 "num_base_bdevs_operational": 1, 00:19:19.852 "base_bdevs_list": [ 00:19:19.852 { 00:19:19.852 "name": null, 00:19:19.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.852 "is_configured": false, 00:19:19.852 "data_offset": 0, 00:19:19.852 "data_size": 7936 00:19:19.852 }, 00:19:19.852 { 00:19:19.852 "name": "BaseBdev2", 00:19:19.852 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:19.852 "is_configured": true, 00:19:19.852 "data_offset": 256, 00:19:19.852 "data_size": 7936 00:19:19.852 } 00:19:19.852 ] 00:19:19.852 }' 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.852 09:56:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.419 09:56:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:20.419 09:56:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.419 09:56:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:20.419 [2024-11-27 09:56:21.287277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:20.419 [2024-11-27 09:56:21.287481] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.419 [2024-11-27 09:56:21.287526] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:20.419 [2024-11-27 09:56:21.287561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.419 [2024-11-27 09:56:21.288254] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.419 [2024-11-27 09:56:21.288337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:20.419 [2024-11-27 09:56:21.288504] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:20.419 [2024-11-27 09:56:21.288552] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:20.419 [2024-11-27 09:56:21.288670] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:20.419 [2024-11-27 09:56:21.288749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:20.419 [2024-11-27 09:56:21.306313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:20.419 spare 00:19:20.419 09:56:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.419 09:56:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:20.419 [2024-11-27 09:56:21.308818] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:21.355 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:21.355 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.355 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:21.355 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:21.355 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.355 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.355 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.355 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.355 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.355 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.355 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.355 "name": "raid_bdev1", 00:19:21.355 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:21.355 "strip_size_kb": 0, 00:19:21.355 
"state": "online", 00:19:21.355 "raid_level": "raid1", 00:19:21.355 "superblock": true, 00:19:21.355 "num_base_bdevs": 2, 00:19:21.355 "num_base_bdevs_discovered": 2, 00:19:21.355 "num_base_bdevs_operational": 2, 00:19:21.355 "process": { 00:19:21.355 "type": "rebuild", 00:19:21.355 "target": "spare", 00:19:21.355 "progress": { 00:19:21.355 "blocks": 2560, 00:19:21.355 "percent": 32 00:19:21.355 } 00:19:21.355 }, 00:19:21.355 "base_bdevs_list": [ 00:19:21.355 { 00:19:21.356 "name": "spare", 00:19:21.356 "uuid": "8b06e187-1e92-55d4-bc0f-529329be0034", 00:19:21.356 "is_configured": true, 00:19:21.356 "data_offset": 256, 00:19:21.356 "data_size": 7936 00:19:21.356 }, 00:19:21.356 { 00:19:21.356 "name": "BaseBdev2", 00:19:21.356 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:21.356 "is_configured": true, 00:19:21.356 "data_offset": 256, 00:19:21.356 "data_size": 7936 00:19:21.356 } 00:19:21.356 ] 00:19:21.356 }' 00:19:21.356 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.356 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:21.356 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.356 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:21.356 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:21.356 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.356 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.356 [2024-11-27 09:56:22.468756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:21.615 [2024-11-27 09:56:22.519617] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:19:21.615 [2024-11-27 09:56:22.519850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.615 [2024-11-27 09:56:22.519877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:21.615 [2024-11-27 09:56:22.519886] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:21.615 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.615 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:21.615 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.615 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.615 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:21.615 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:21.615 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:21.615 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.615 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.615 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.615 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.615 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.615 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.615 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.615 09:56:22 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:21.615 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.615 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.615 "name": "raid_bdev1", 00:19:21.615 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:21.615 "strip_size_kb": 0, 00:19:21.615 "state": "online", 00:19:21.615 "raid_level": "raid1", 00:19:21.615 "superblock": true, 00:19:21.615 "num_base_bdevs": 2, 00:19:21.615 "num_base_bdevs_discovered": 1, 00:19:21.615 "num_base_bdevs_operational": 1, 00:19:21.615 "base_bdevs_list": [ 00:19:21.615 { 00:19:21.615 "name": null, 00:19:21.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.615 "is_configured": false, 00:19:21.615 "data_offset": 0, 00:19:21.615 "data_size": 7936 00:19:21.615 }, 00:19:21.615 { 00:19:21.615 "name": "BaseBdev2", 00:19:21.615 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:21.615 "is_configured": true, 00:19:21.615 "data_offset": 256, 00:19:21.615 "data_size": 7936 00:19:21.615 } 00:19:21.615 ] 00:19:21.615 }' 00:19:21.615 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.615 09:56:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.194 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:22.194 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:22.195 "name": "raid_bdev1", 00:19:22.195 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:22.195 "strip_size_kb": 0, 00:19:22.195 "state": "online", 00:19:22.195 "raid_level": "raid1", 00:19:22.195 "superblock": true, 00:19:22.195 "num_base_bdevs": 2, 00:19:22.195 "num_base_bdevs_discovered": 1, 00:19:22.195 "num_base_bdevs_operational": 1, 00:19:22.195 "base_bdevs_list": [ 00:19:22.195 { 00:19:22.195 "name": null, 00:19:22.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.195 "is_configured": false, 00:19:22.195 "data_offset": 0, 00:19:22.195 "data_size": 7936 00:19:22.195 }, 00:19:22.195 { 00:19:22.195 "name": "BaseBdev2", 00:19:22.195 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:22.195 "is_configured": true, 00:19:22.195 "data_offset": 256, 00:19:22.195 "data_size": 7936 00:19:22.195 } 00:19:22.195 ] 00:19:22.195 }' 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:22.195 [2024-11-27 09:56:23.170970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:22.195 [2024-11-27 09:56:23.171079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.195 [2024-11-27 09:56:23.171114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:22.195 [2024-11-27 09:56:23.171140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.195 [2024-11-27 09:56:23.171755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.195 [2024-11-27 09:56:23.171781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:22.195 [2024-11-27 09:56:23.171893] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:22.195 [2024-11-27 09:56:23.171911] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:22.195 [2024-11-27 09:56:23.171923] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:22.195 [2024-11-27 09:56:23.171936] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:22.195 BaseBdev1 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.195 09:56:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:23.147 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:23.147 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.147 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.147 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:23.147 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:23.147 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:23.147 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.147 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.147 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.147 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.147 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.148 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.148 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.148 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.148 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.148 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.148 "name": "raid_bdev1", 00:19:23.148 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:23.148 "strip_size_kb": 0, 00:19:23.148 "state": "online", 00:19:23.148 "raid_level": "raid1", 00:19:23.148 "superblock": true, 00:19:23.148 "num_base_bdevs": 2, 00:19:23.148 "num_base_bdevs_discovered": 1, 00:19:23.148 "num_base_bdevs_operational": 1, 00:19:23.148 "base_bdevs_list": [ 00:19:23.148 { 00:19:23.148 "name": null, 00:19:23.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.148 "is_configured": false, 00:19:23.148 "data_offset": 0, 00:19:23.148 "data_size": 7936 00:19:23.148 }, 00:19:23.148 { 00:19:23.148 "name": "BaseBdev2", 00:19:23.148 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:23.148 "is_configured": true, 00:19:23.148 "data_offset": 256, 00:19:23.148 "data_size": 7936 00:19:23.148 } 00:19:23.148 ] 00:19:23.148 }' 00:19:23.148 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.148 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:23.716 "name": "raid_bdev1", 00:19:23.716 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:23.716 "strip_size_kb": 0, 00:19:23.716 "state": "online", 00:19:23.716 "raid_level": "raid1", 00:19:23.716 "superblock": true, 00:19:23.716 "num_base_bdevs": 2, 00:19:23.716 "num_base_bdevs_discovered": 1, 00:19:23.716 "num_base_bdevs_operational": 1, 00:19:23.716 "base_bdevs_list": [ 00:19:23.716 { 00:19:23.716 "name": null, 00:19:23.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.716 "is_configured": false, 00:19:23.716 "data_offset": 0, 00:19:23.716 "data_size": 7936 00:19:23.716 }, 00:19:23.716 { 00:19:23.716 "name": "BaseBdev2", 00:19:23.716 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:23.716 "is_configured": true, 00:19:23.716 "data_offset": 256, 00:19:23.716 "data_size": 7936 00:19:23.716 } 00:19:23.716 ] 00:19:23.716 }' 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.716 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:23.716 [2024-11-27 09:56:24.792246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:23.716 [2024-11-27 09:56:24.792580] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:23.716 [2024-11-27 09:56:24.792700] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:23.716 request: 00:19:23.716 { 00:19:23.716 "base_bdev": "BaseBdev1", 00:19:23.716 "raid_bdev": "raid_bdev1", 00:19:23.716 "method": "bdev_raid_add_base_bdev", 00:19:23.716 "req_id": 1 00:19:23.716 } 00:19:23.716 Got JSON-RPC error response 00:19:23.716 response: 00:19:23.716 { 00:19:23.716 "code": -22, 00:19:23.716 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:23.716 } 00:19:23.717 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:19:23.717 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:19:23.717 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:23.717 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:23.717 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:23.717 09:56:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:25.094 09:56:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:25.094 09:56:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.094 09:56:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.094 09:56:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.094 09:56:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.094 09:56:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:25.094 09:56:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.094 09:56:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.094 09:56:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.094 09:56:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.094 09:56:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.094 09:56:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.094 09:56:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:25.094 09:56:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.094 09:56:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.094 09:56:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.094 "name": "raid_bdev1", 00:19:25.094 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:25.094 "strip_size_kb": 0, 00:19:25.094 "state": "online", 00:19:25.094 "raid_level": "raid1", 00:19:25.094 "superblock": true, 00:19:25.094 "num_base_bdevs": 2, 00:19:25.094 "num_base_bdevs_discovered": 1, 00:19:25.094 "num_base_bdevs_operational": 1, 00:19:25.094 "base_bdevs_list": [ 00:19:25.094 { 00:19:25.094 "name": null, 00:19:25.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.094 "is_configured": false, 00:19:25.094 "data_offset": 0, 00:19:25.094 "data_size": 7936 00:19:25.094 }, 00:19:25.094 { 00:19:25.094 "name": "BaseBdev2", 00:19:25.094 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:25.094 "is_configured": true, 00:19:25.094 "data_offset": 256, 00:19:25.094 "data_size": 7936 00:19:25.094 } 00:19:25.094 ] 00:19:25.094 }' 00:19:25.094 09:56:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.094 09:56:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.353 09:56:26 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:25.353 "name": "raid_bdev1", 00:19:25.353 "uuid": "aebb28c0-167c-4f11-8d8b-85cefaed8ccf", 00:19:25.353 "strip_size_kb": 0, 00:19:25.353 "state": "online", 00:19:25.353 "raid_level": "raid1", 00:19:25.353 "superblock": true, 00:19:25.353 "num_base_bdevs": 2, 00:19:25.353 "num_base_bdevs_discovered": 1, 00:19:25.353 "num_base_bdevs_operational": 1, 00:19:25.353 "base_bdevs_list": [ 00:19:25.353 { 00:19:25.353 "name": null, 00:19:25.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.353 "is_configured": false, 00:19:25.353 "data_offset": 0, 00:19:25.353 "data_size": 7936 00:19:25.353 }, 00:19:25.353 { 00:19:25.353 "name": "BaseBdev2", 00:19:25.353 "uuid": "c97513c5-b664-5098-b6d9-11ccdce57382", 00:19:25.353 "is_configured": true, 00:19:25.353 "data_offset": 256, 00:19:25.353 "data_size": 7936 00:19:25.353 } 00:19:25.353 ] 00:19:25.353 }' 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:25.353 09:56:26 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86831 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86831 ']' 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86831 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86831 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86831' 00:19:25.353 killing process with pid 86831 00:19:25.353 Received shutdown signal, test time was about 60.000000 seconds 00:19:25.353 00:19:25.353 Latency(us) 00:19:25.353 [2024-11-27T09:56:26.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.353 [2024-11-27T09:56:26.486Z] =================================================================================================================== 00:19:25.353 [2024-11-27T09:56:26.486Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86831 00:19:25.353 [2024-11-27 09:56:26.463313] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:25.353 09:56:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86831 00:19:25.354 [2024-11-27 09:56:26.463507] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:25.354 [2024-11-27 
09:56:26.463567] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:25.354 [2024-11-27 09:56:26.463580] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:25.922 [2024-11-27 09:56:26.790157] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:27.299 ************************************ 00:19:27.299 END TEST raid_rebuild_test_sb_4k 00:19:27.299 ************************************ 00:19:27.299 09:56:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:19:27.299 00:19:27.299 real 0m20.373s 00:19:27.299 user 0m26.379s 00:19:27.299 sys 0m2.990s 00:19:27.299 09:56:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.300 09:56:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:27.300 09:56:28 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:19:27.300 09:56:28 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:19:27.300 09:56:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:27.300 09:56:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.300 09:56:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.300 ************************************ 00:19:27.300 START TEST raid_state_function_test_sb_md_separate 00:19:27.300 ************************************ 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:27.300 
09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:27.300 09:56:28 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:27.300 Process raid pid: 87528 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87528 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87528' 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:27.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87528 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87528 ']' 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.300 09:56:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:27.300 [2024-11-27 09:56:28.192852] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:19:27.300 [2024-11-27 09:56:28.193147] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.300 [2024-11-27 09:56:28.377582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.559 [2024-11-27 09:56:28.523404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.818 [2024-11-27 09:56:28.770827] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:27.818 [2024-11-27 09:56:28.771037] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.078 [2024-11-27 09:56:29.041820] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:28.078 [2024-11-27 09:56:29.041967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:19:28.078 [2024-11-27 09:56:29.042011] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:28.078 [2024-11-27 09:56:29.042038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.078 "name": "Existed_Raid", 00:19:28.078 "uuid": "870d3b0e-0bb3-4221-a8e0-3f1f2f5adce8", 00:19:28.078 "strip_size_kb": 0, 00:19:28.078 "state": "configuring", 00:19:28.078 "raid_level": "raid1", 00:19:28.078 "superblock": true, 00:19:28.078 "num_base_bdevs": 2, 00:19:28.078 "num_base_bdevs_discovered": 0, 00:19:28.078 "num_base_bdevs_operational": 2, 00:19:28.078 "base_bdevs_list": [ 00:19:28.078 { 00:19:28.078 "name": "BaseBdev1", 00:19:28.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.078 "is_configured": false, 00:19:28.078 "data_offset": 0, 00:19:28.078 "data_size": 0 00:19:28.078 }, 00:19:28.078 { 00:19:28.078 "name": "BaseBdev2", 00:19:28.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.078 "is_configured": false, 00:19:28.078 "data_offset": 0, 00:19:28.078 "data_size": 0 00:19:28.078 } 00:19:28.078 ] 00:19:28.078 }' 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.078 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.646 
[2024-11-27 09:56:29.512930] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:28.646 [2024-11-27 09:56:29.512979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.646 [2024-11-27 09:56:29.524915] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:28.646 [2024-11-27 09:56:29.525074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:28.646 [2024-11-27 09:56:29.525111] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:28.646 [2024-11-27 09:56:29.525140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.646 [2024-11-27 09:56:29.581986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:28.646 
BaseBdev1 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.646 [ 00:19:28.646 { 00:19:28.646 "name": "BaseBdev1", 00:19:28.646 "aliases": [ 00:19:28.646 "5b3a0ea5-f1ca-4753-a2c1-dff1c7994b4d" 00:19:28.646 ], 00:19:28.646 "product_name": "Malloc disk", 
00:19:28.646 "block_size": 4096, 00:19:28.646 "num_blocks": 8192, 00:19:28.646 "uuid": "5b3a0ea5-f1ca-4753-a2c1-dff1c7994b4d", 00:19:28.646 "md_size": 32, 00:19:28.646 "md_interleave": false, 00:19:28.646 "dif_type": 0, 00:19:28.646 "assigned_rate_limits": { 00:19:28.646 "rw_ios_per_sec": 0, 00:19:28.646 "rw_mbytes_per_sec": 0, 00:19:28.646 "r_mbytes_per_sec": 0, 00:19:28.646 "w_mbytes_per_sec": 0 00:19:28.646 }, 00:19:28.646 "claimed": true, 00:19:28.646 "claim_type": "exclusive_write", 00:19:28.646 "zoned": false, 00:19:28.646 "supported_io_types": { 00:19:28.646 "read": true, 00:19:28.646 "write": true, 00:19:28.646 "unmap": true, 00:19:28.646 "flush": true, 00:19:28.646 "reset": true, 00:19:28.646 "nvme_admin": false, 00:19:28.646 "nvme_io": false, 00:19:28.646 "nvme_io_md": false, 00:19:28.646 "write_zeroes": true, 00:19:28.646 "zcopy": true, 00:19:28.646 "get_zone_info": false, 00:19:28.646 "zone_management": false, 00:19:28.646 "zone_append": false, 00:19:28.646 "compare": false, 00:19:28.646 "compare_and_write": false, 00:19:28.646 "abort": true, 00:19:28.646 "seek_hole": false, 00:19:28.646 "seek_data": false, 00:19:28.646 "copy": true, 00:19:28.646 "nvme_iov_md": false 00:19:28.646 }, 00:19:28.646 "memory_domains": [ 00:19:28.646 { 00:19:28.646 "dma_device_id": "system", 00:19:28.646 "dma_device_type": 1 00:19:28.646 }, 00:19:28.646 { 00:19:28.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.646 "dma_device_type": 2 00:19:28.646 } 00:19:28.646 ], 00:19:28.646 "driver_specific": {} 00:19:28.646 } 00:19:28.646 ] 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:28.646 09:56:29 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.646 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.646 "name": "Existed_Raid", 00:19:28.646 "uuid": "91deff6d-60fd-4e35-adce-794a4ebbfa49", 
00:19:28.646 "strip_size_kb": 0, 00:19:28.646 "state": "configuring", 00:19:28.646 "raid_level": "raid1", 00:19:28.646 "superblock": true, 00:19:28.646 "num_base_bdevs": 2, 00:19:28.646 "num_base_bdevs_discovered": 1, 00:19:28.647 "num_base_bdevs_operational": 2, 00:19:28.647 "base_bdevs_list": [ 00:19:28.647 { 00:19:28.647 "name": "BaseBdev1", 00:19:28.647 "uuid": "5b3a0ea5-f1ca-4753-a2c1-dff1c7994b4d", 00:19:28.647 "is_configured": true, 00:19:28.647 "data_offset": 256, 00:19:28.647 "data_size": 7936 00:19:28.647 }, 00:19:28.647 { 00:19:28.647 "name": "BaseBdev2", 00:19:28.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.647 "is_configured": false, 00:19:28.647 "data_offset": 0, 00:19:28.647 "data_size": 0 00:19:28.647 } 00:19:28.647 ] 00:19:28.647 }' 00:19:28.647 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.647 09:56:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.215 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.216 [2024-11-27 09:56:30.077261] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:29.216 [2024-11-27 09:56:30.077335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:29.216 09:56:30 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.216 [2024-11-27 09:56:30.089304] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:29.216 [2024-11-27 09:56:30.091589] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:29.216 [2024-11-27 09:56:30.091647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.216 "name": "Existed_Raid", 00:19:29.216 "uuid": "e35de5f7-12b1-43fe-85b1-72d7918c9e08", 00:19:29.216 "strip_size_kb": 0, 00:19:29.216 "state": "configuring", 00:19:29.216 "raid_level": "raid1", 00:19:29.216 "superblock": true, 00:19:29.216 "num_base_bdevs": 2, 00:19:29.216 "num_base_bdevs_discovered": 1, 00:19:29.216 "num_base_bdevs_operational": 2, 00:19:29.216 "base_bdevs_list": [ 00:19:29.216 { 00:19:29.216 "name": "BaseBdev1", 00:19:29.216 "uuid": "5b3a0ea5-f1ca-4753-a2c1-dff1c7994b4d", 00:19:29.216 "is_configured": true, 00:19:29.216 "data_offset": 256, 00:19:29.216 "data_size": 7936 00:19:29.216 }, 00:19:29.216 { 00:19:29.216 "name": "BaseBdev2", 00:19:29.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.216 "is_configured": false, 00:19:29.216 "data_offset": 0, 00:19:29.216 "data_size": 0 00:19:29.216 } 00:19:29.216 ] 00:19:29.216 }' 00:19:29.216 09:56:30 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.216 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.476 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:19:29.476 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.476 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.736 [2024-11-27 09:56:30.614678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:29.736 [2024-11-27 09:56:30.615142] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:29.736 [2024-11-27 09:56:30.615211] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:29.736 [2024-11-27 09:56:30.615345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:29.736 [2024-11-27 09:56:30.615556] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:29.736 [2024-11-27 09:56:30.615605] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:29.736 [2024-11-27 09:56:30.615745] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.736 BaseBdev2 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.736 [ 00:19:29.736 { 00:19:29.736 "name": "BaseBdev2", 00:19:29.736 "aliases": [ 00:19:29.736 "28c61078-c4b4-44d5-8a37-b84e1855deb8" 00:19:29.736 ], 00:19:29.736 "product_name": "Malloc disk", 00:19:29.736 "block_size": 4096, 00:19:29.736 "num_blocks": 8192, 00:19:29.736 "uuid": "28c61078-c4b4-44d5-8a37-b84e1855deb8", 00:19:29.736 "md_size": 32, 00:19:29.736 "md_interleave": false, 00:19:29.736 "dif_type": 0, 00:19:29.736 "assigned_rate_limits": { 00:19:29.736 "rw_ios_per_sec": 0, 00:19:29.736 "rw_mbytes_per_sec": 0, 00:19:29.736 "r_mbytes_per_sec": 0, 00:19:29.736 "w_mbytes_per_sec": 0 00:19:29.736 }, 00:19:29.736 "claimed": true, 00:19:29.736 "claim_type": 
"exclusive_write", 00:19:29.736 "zoned": false, 00:19:29.736 "supported_io_types": { 00:19:29.736 "read": true, 00:19:29.736 "write": true, 00:19:29.736 "unmap": true, 00:19:29.736 "flush": true, 00:19:29.736 "reset": true, 00:19:29.736 "nvme_admin": false, 00:19:29.736 "nvme_io": false, 00:19:29.736 "nvme_io_md": false, 00:19:29.736 "write_zeroes": true, 00:19:29.736 "zcopy": true, 00:19:29.736 "get_zone_info": false, 00:19:29.736 "zone_management": false, 00:19:29.736 "zone_append": false, 00:19:29.736 "compare": false, 00:19:29.736 "compare_and_write": false, 00:19:29.736 "abort": true, 00:19:29.736 "seek_hole": false, 00:19:29.736 "seek_data": false, 00:19:29.736 "copy": true, 00:19:29.736 "nvme_iov_md": false 00:19:29.736 }, 00:19:29.736 "memory_domains": [ 00:19:29.736 { 00:19:29.736 "dma_device_id": "system", 00:19:29.736 "dma_device_type": 1 00:19:29.736 }, 00:19:29.736 { 00:19:29.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:29.736 "dma_device_type": 2 00:19:29.736 } 00:19:29.736 ], 00:19:29.736 "driver_specific": {} 00:19:29.736 } 00:19:29.736 ] 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.736 
09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.736 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.736 "name": "Existed_Raid", 00:19:29.736 "uuid": "e35de5f7-12b1-43fe-85b1-72d7918c9e08", 00:19:29.736 "strip_size_kb": 0, 00:19:29.736 "state": "online", 00:19:29.736 "raid_level": "raid1", 00:19:29.736 "superblock": true, 00:19:29.736 "num_base_bdevs": 2, 00:19:29.736 "num_base_bdevs_discovered": 2, 00:19:29.737 "num_base_bdevs_operational": 2, 00:19:29.737 
"base_bdevs_list": [ 00:19:29.737 { 00:19:29.737 "name": "BaseBdev1", 00:19:29.737 "uuid": "5b3a0ea5-f1ca-4753-a2c1-dff1c7994b4d", 00:19:29.737 "is_configured": true, 00:19:29.737 "data_offset": 256, 00:19:29.737 "data_size": 7936 00:19:29.737 }, 00:19:29.737 { 00:19:29.737 "name": "BaseBdev2", 00:19:29.737 "uuid": "28c61078-c4b4-44d5-8a37-b84e1855deb8", 00:19:29.737 "is_configured": true, 00:19:29.737 "data_offset": 256, 00:19:29.737 "data_size": 7936 00:19:29.737 } 00:19:29.737 ] 00:19:29.737 }' 00:19:29.737 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.737 09:56:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.305 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:30.305 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:30.305 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:30.305 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:30.305 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:30.305 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:30.305 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:30.305 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:30.305 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.305 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:19:30.305 [2024-11-27 09:56:31.158235] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:30.305 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.305 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:30.305 "name": "Existed_Raid", 00:19:30.305 "aliases": [ 00:19:30.305 "e35de5f7-12b1-43fe-85b1-72d7918c9e08" 00:19:30.305 ], 00:19:30.305 "product_name": "Raid Volume", 00:19:30.305 "block_size": 4096, 00:19:30.305 "num_blocks": 7936, 00:19:30.305 "uuid": "e35de5f7-12b1-43fe-85b1-72d7918c9e08", 00:19:30.305 "md_size": 32, 00:19:30.305 "md_interleave": false, 00:19:30.305 "dif_type": 0, 00:19:30.305 "assigned_rate_limits": { 00:19:30.305 "rw_ios_per_sec": 0, 00:19:30.305 "rw_mbytes_per_sec": 0, 00:19:30.305 "r_mbytes_per_sec": 0, 00:19:30.305 "w_mbytes_per_sec": 0 00:19:30.305 }, 00:19:30.305 "claimed": false, 00:19:30.305 "zoned": false, 00:19:30.306 "supported_io_types": { 00:19:30.306 "read": true, 00:19:30.306 "write": true, 00:19:30.306 "unmap": false, 00:19:30.306 "flush": false, 00:19:30.306 "reset": true, 00:19:30.306 "nvme_admin": false, 00:19:30.306 "nvme_io": false, 00:19:30.306 "nvme_io_md": false, 00:19:30.306 "write_zeroes": true, 00:19:30.306 "zcopy": false, 00:19:30.306 "get_zone_info": false, 00:19:30.306 "zone_management": false, 00:19:30.306 "zone_append": false, 00:19:30.306 "compare": false, 00:19:30.306 "compare_and_write": false, 00:19:30.306 "abort": false, 00:19:30.306 "seek_hole": false, 00:19:30.306 "seek_data": false, 00:19:30.306 "copy": false, 00:19:30.306 "nvme_iov_md": false 00:19:30.306 }, 00:19:30.306 "memory_domains": [ 00:19:30.306 { 00:19:30.306 "dma_device_id": "system", 00:19:30.306 "dma_device_type": 1 00:19:30.306 }, 00:19:30.306 { 00:19:30.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.306 "dma_device_type": 2 00:19:30.306 }, 00:19:30.306 { 
00:19:30.306 "dma_device_id": "system", 00:19:30.306 "dma_device_type": 1 00:19:30.306 }, 00:19:30.306 { 00:19:30.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.306 "dma_device_type": 2 00:19:30.306 } 00:19:30.306 ], 00:19:30.306 "driver_specific": { 00:19:30.306 "raid": { 00:19:30.306 "uuid": "e35de5f7-12b1-43fe-85b1-72d7918c9e08", 00:19:30.306 "strip_size_kb": 0, 00:19:30.306 "state": "online", 00:19:30.306 "raid_level": "raid1", 00:19:30.306 "superblock": true, 00:19:30.306 "num_base_bdevs": 2, 00:19:30.306 "num_base_bdevs_discovered": 2, 00:19:30.306 "num_base_bdevs_operational": 2, 00:19:30.306 "base_bdevs_list": [ 00:19:30.306 { 00:19:30.306 "name": "BaseBdev1", 00:19:30.306 "uuid": "5b3a0ea5-f1ca-4753-a2c1-dff1c7994b4d", 00:19:30.306 "is_configured": true, 00:19:30.306 "data_offset": 256, 00:19:30.306 "data_size": 7936 00:19:30.306 }, 00:19:30.306 { 00:19:30.306 "name": "BaseBdev2", 00:19:30.306 "uuid": "28c61078-c4b4-44d5-8a37-b84e1855deb8", 00:19:30.306 "is_configured": true, 00:19:30.306 "data_offset": 256, 00:19:30.306 "data_size": 7936 00:19:30.306 } 00:19:30.306 ] 00:19:30.306 } 00:19:30.306 } 00:19:30.306 }' 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:30.306 BaseBdev2' 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.306 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.306 [2024-11-27 09:56:31.377564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:30.565 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.565 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:30.565 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:30.565 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:30.565 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:30.565 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:30.565 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:30.565 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:30.565 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.565 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:30.565 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:30.565 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:19:30.565 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.565 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.565 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.565 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.565 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.565 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.565 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.566 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:30.566 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.566 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.566 "name": "Existed_Raid", 00:19:30.566 "uuid": "e35de5f7-12b1-43fe-85b1-72d7918c9e08", 00:19:30.566 "strip_size_kb": 0, 00:19:30.566 "state": "online", 00:19:30.566 "raid_level": "raid1", 00:19:30.566 "superblock": true, 00:19:30.566 "num_base_bdevs": 2, 00:19:30.566 "num_base_bdevs_discovered": 1, 00:19:30.566 "num_base_bdevs_operational": 1, 00:19:30.566 "base_bdevs_list": [ 00:19:30.566 { 00:19:30.566 "name": null, 00:19:30.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.566 "is_configured": false, 00:19:30.566 "data_offset": 0, 00:19:30.566 "data_size": 7936 00:19:30.566 }, 00:19:30.566 { 00:19:30.566 "name": "BaseBdev2", 00:19:30.566 "uuid": 
"28c61078-c4b4-44d5-8a37-b84e1855deb8", 00:19:30.566 "is_configured": true, 00:19:30.566 "data_offset": 256, 00:19:30.566 "data_size": 7936 00:19:30.566 } 00:19:30.566 ] 00:19:30.566 }' 00:19:30.566 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.566 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.135 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:31.135 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:31.135 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.135 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:31.135 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.135 09:56:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.135 [2024-11-27 09:56:32.034937] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:31.135 [2024-11-27 09:56:32.035086] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:31.135 [2024-11-27 09:56:32.148855] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:31.135 [2024-11-27 09:56:32.148921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:31.135 [2024-11-27 09:56:32.148935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:31.135 09:56:32 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87528 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87528 ']' 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87528 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87528 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87528' 00:19:31.135 killing process with pid 87528 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87528 00:19:31.135 [2024-11-27 09:56:32.236541] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:31.135 09:56:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87528 00:19:31.135 [2024-11-27 09:56:32.254832] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:32.513 09:56:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:19:32.513 00:19:32.513 real 0m5.401s 00:19:32.513 user 0m7.603s 00:19:32.513 sys 0m1.065s 00:19:32.513 09:56:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:32.513 
************************************ 00:19:32.513 END TEST raid_state_function_test_sb_md_separate 00:19:32.513 ************************************ 00:19:32.513 09:56:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.513 09:56:33 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:19:32.513 09:56:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:32.513 09:56:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:32.513 09:56:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:32.513 ************************************ 00:19:32.513 START TEST raid_superblock_test_md_separate 00:19:32.513 ************************************ 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87781 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87781 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87781 ']' 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.513 09:56:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:32.514 09:56:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.514 09:56:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:32.773 [2024-11-27 09:56:33.661064] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:19:32.773 [2024-11-27 09:56:33.661324] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87781 ] 00:19:32.773 [2024-11-27 09:56:33.840525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.032 [2024-11-27 09:56:33.984148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.292 [2024-11-27 09:56:34.226479] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:33.292 [2024-11-27 09:56:34.226570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:33.551 09:56:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.551 09:56:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:33.551 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:33.551 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:33.552 09:56:34 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.552 malloc1 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.552 [2024-11-27 09:56:34.566937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:33.552 [2024-11-27 09:56:34.567113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.552 [2024-11-27 09:56:34.567163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:33.552 [2024-11-27 09:56:34.567194] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.552 [2024-11-27 09:56:34.569666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.552 [2024-11-27 09:56:34.569767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:19:33.552 pt1 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.552 malloc2 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.552 09:56:34 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.552 [2024-11-27 09:56:34.635428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:33.552 [2024-11-27 09:56:34.635583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.552 [2024-11-27 09:56:34.635629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:33.552 [2024-11-27 09:56:34.635661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.552 [2024-11-27 09:56:34.638219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.552 [2024-11-27 09:56:34.638316] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:33.552 pt2 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.552 [2024-11-27 09:56:34.647443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:33.552 [2024-11-27 09:56:34.649767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:33.552 [2024-11-27 09:56:34.649987] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:33.552 [2024-11-27 09:56:34.650013] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:33.552 [2024-11-27 09:56:34.650122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:33.552 [2024-11-27 09:56:34.650267] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:33.552 [2024-11-27 09:56:34.650279] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:33.552 [2024-11-27 09:56:34.650401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:33.552 09:56:34 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:33.552 09:56:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.812 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:33.812 "name": "raid_bdev1", 00:19:33.812 "uuid": "49e370eb-2759-46d5-81b2-4a22a939aab4", 00:19:33.812 "strip_size_kb": 0, 00:19:33.812 "state": "online", 00:19:33.812 "raid_level": "raid1", 00:19:33.812 "superblock": true, 00:19:33.812 "num_base_bdevs": 2, 00:19:33.812 "num_base_bdevs_discovered": 2, 00:19:33.812 "num_base_bdevs_operational": 2, 00:19:33.812 "base_bdevs_list": [ 00:19:33.812 { 00:19:33.812 "name": "pt1", 00:19:33.812 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:33.812 "is_configured": true, 00:19:33.812 "data_offset": 256, 00:19:33.812 "data_size": 7936 00:19:33.812 }, 00:19:33.812 { 00:19:33.812 "name": "pt2", 00:19:33.812 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:33.812 "is_configured": true, 00:19:33.812 "data_offset": 256, 00:19:33.812 "data_size": 7936 00:19:33.812 } 00:19:33.812 ] 00:19:33.812 }' 00:19:33.812 09:56:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:33.812 09:56:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.071 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:34.071 09:56:35 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:34.071 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:34.071 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:34.071 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:34.071 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:34.071 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:34.071 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:34.071 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.071 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.071 [2024-11-27 09:56:35.134955] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.071 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.071 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:34.071 "name": "raid_bdev1", 00:19:34.071 "aliases": [ 00:19:34.071 "49e370eb-2759-46d5-81b2-4a22a939aab4" 00:19:34.071 ], 00:19:34.071 "product_name": "Raid Volume", 00:19:34.071 "block_size": 4096, 00:19:34.071 "num_blocks": 7936, 00:19:34.071 "uuid": "49e370eb-2759-46d5-81b2-4a22a939aab4", 00:19:34.071 "md_size": 32, 00:19:34.071 "md_interleave": false, 00:19:34.071 "dif_type": 0, 00:19:34.071 "assigned_rate_limits": { 00:19:34.071 "rw_ios_per_sec": 0, 00:19:34.071 "rw_mbytes_per_sec": 0, 00:19:34.071 "r_mbytes_per_sec": 0, 00:19:34.071 "w_mbytes_per_sec": 0 00:19:34.071 }, 00:19:34.071 "claimed": false, 00:19:34.071 "zoned": false, 
00:19:34.071 "supported_io_types": { 00:19:34.071 "read": true, 00:19:34.071 "write": true, 00:19:34.071 "unmap": false, 00:19:34.071 "flush": false, 00:19:34.071 "reset": true, 00:19:34.071 "nvme_admin": false, 00:19:34.071 "nvme_io": false, 00:19:34.071 "nvme_io_md": false, 00:19:34.071 "write_zeroes": true, 00:19:34.071 "zcopy": false, 00:19:34.071 "get_zone_info": false, 00:19:34.071 "zone_management": false, 00:19:34.071 "zone_append": false, 00:19:34.071 "compare": false, 00:19:34.071 "compare_and_write": false, 00:19:34.071 "abort": false, 00:19:34.071 "seek_hole": false, 00:19:34.071 "seek_data": false, 00:19:34.071 "copy": false, 00:19:34.071 "nvme_iov_md": false 00:19:34.071 }, 00:19:34.071 "memory_domains": [ 00:19:34.071 { 00:19:34.071 "dma_device_id": "system", 00:19:34.071 "dma_device_type": 1 00:19:34.071 }, 00:19:34.071 { 00:19:34.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.071 "dma_device_type": 2 00:19:34.071 }, 00:19:34.071 { 00:19:34.071 "dma_device_id": "system", 00:19:34.071 "dma_device_type": 1 00:19:34.071 }, 00:19:34.071 { 00:19:34.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.071 "dma_device_type": 2 00:19:34.071 } 00:19:34.071 ], 00:19:34.071 "driver_specific": { 00:19:34.071 "raid": { 00:19:34.071 "uuid": "49e370eb-2759-46d5-81b2-4a22a939aab4", 00:19:34.071 "strip_size_kb": 0, 00:19:34.071 "state": "online", 00:19:34.071 "raid_level": "raid1", 00:19:34.071 "superblock": true, 00:19:34.071 "num_base_bdevs": 2, 00:19:34.071 "num_base_bdevs_discovered": 2, 00:19:34.071 "num_base_bdevs_operational": 2, 00:19:34.071 "base_bdevs_list": [ 00:19:34.071 { 00:19:34.071 "name": "pt1", 00:19:34.071 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:34.071 "is_configured": true, 00:19:34.071 "data_offset": 256, 00:19:34.071 "data_size": 7936 00:19:34.071 }, 00:19:34.071 { 00:19:34.071 "name": "pt2", 00:19:34.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:34.071 "is_configured": true, 00:19:34.071 "data_offset": 256, 
00:19:34.071 "data_size": 7936 00:19:34.071 } 00:19:34.071 ] 00:19:34.071 } 00:19:34.071 } 00:19:34.071 }' 00:19:34.071 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:34.330 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:34.330 pt2' 00:19:34.330 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.330 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:34.331 [2024-11-27 09:56:35.374503] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=49e370eb-2759-46d5-81b2-4a22a939aab4 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 49e370eb-2759-46d5-81b2-4a22a939aab4 ']' 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.331 [2024-11-27 09:56:35.418131] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:34.331 [2024-11-27 09:56:35.418239] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:34.331 [2024-11-27 09:56:35.418377] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:34.331 [2024-11-27 09:56:35.418447] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:34.331 [2024-11-27 09:56:35.418461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.331 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:19:34.590 09:56:35 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.590 [2024-11-27 09:56:35.565908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:34.590 [2024-11-27 09:56:35.568363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:34.590 [2024-11-27 09:56:35.568540] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:34.590 [2024-11-27 09:56:35.568662] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:34.590 [2024-11-27 09:56:35.568720] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:34.590 [2024-11-27 09:56:35.568754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:34.590 request: 00:19:34.590 { 00:19:34.590 "name": 
"raid_bdev1", 00:19:34.590 "raid_level": "raid1", 00:19:34.590 "base_bdevs": [ 00:19:34.590 "malloc1", 00:19:34.590 "malloc2" 00:19:34.590 ], 00:19:34.590 "superblock": false, 00:19:34.590 "method": "bdev_raid_create", 00:19:34.590 "req_id": 1 00:19:34.590 } 00:19:34.590 Got JSON-RPC error response 00:19:34.590 response: 00:19:34.590 { 00:19:34.590 "code": -17, 00:19:34.590 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:34.590 } 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.590 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.591 [2024-11-27 09:56:35.633758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:34.591 [2024-11-27 09:56:35.633910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.591 [2024-11-27 09:56:35.633948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:34.591 [2024-11-27 09:56:35.634020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.591 [2024-11-27 09:56:35.636666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.591 [2024-11-27 09:56:35.636721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:34.591 [2024-11-27 09:56:35.636802] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:34.591 [2024-11-27 09:56:35.636873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:34.591 pt1 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.591 "name": "raid_bdev1", 00:19:34.591 "uuid": "49e370eb-2759-46d5-81b2-4a22a939aab4", 00:19:34.591 "strip_size_kb": 0, 00:19:34.591 "state": "configuring", 00:19:34.591 "raid_level": "raid1", 00:19:34.591 "superblock": true, 00:19:34.591 "num_base_bdevs": 2, 00:19:34.591 "num_base_bdevs_discovered": 1, 00:19:34.591 "num_base_bdevs_operational": 2, 00:19:34.591 "base_bdevs_list": [ 00:19:34.591 { 00:19:34.591 "name": "pt1", 00:19:34.591 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:34.591 "is_configured": true, 00:19:34.591 "data_offset": 256, 00:19:34.591 "data_size": 7936 00:19:34.591 }, 00:19:34.591 { 00:19:34.591 "name": null, 00:19:34.591 
"uuid": "00000000-0000-0000-0000-000000000002", 00:19:34.591 "is_configured": false, 00:19:34.591 "data_offset": 256, 00:19:34.591 "data_size": 7936 00:19:34.591 } 00:19:34.591 ] 00:19:34.591 }' 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.591 09:56:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.160 [2024-11-27 09:56:36.104940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:35.160 [2024-11-27 09:56:36.105121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.160 [2024-11-27 09:56:36.105170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:35.160 [2024-11-27 09:56:36.105221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.160 [2024-11-27 09:56:36.105561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.160 [2024-11-27 09:56:36.105622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:35.160 [2024-11-27 09:56:36.105728] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:19:35.160 [2024-11-27 09:56:36.105764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:35.160 [2024-11-27 09:56:36.105918] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:35.160 [2024-11-27 09:56:36.105931] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:35.160 [2024-11-27 09:56:36.106046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:35.160 [2024-11-27 09:56:36.106184] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:35.160 [2024-11-27 09:56:36.106193] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:35.160 [2024-11-27 09:56:36.106304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:35.160 pt2 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.160 "name": "raid_bdev1", 00:19:35.160 "uuid": "49e370eb-2759-46d5-81b2-4a22a939aab4", 00:19:35.160 "strip_size_kb": 0, 00:19:35.160 "state": "online", 00:19:35.160 "raid_level": "raid1", 00:19:35.160 "superblock": true, 00:19:35.160 "num_base_bdevs": 2, 00:19:35.160 "num_base_bdevs_discovered": 2, 00:19:35.160 "num_base_bdevs_operational": 2, 00:19:35.160 "base_bdevs_list": [ 00:19:35.160 { 00:19:35.160 "name": "pt1", 00:19:35.160 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:35.160 "is_configured": true, 00:19:35.160 "data_offset": 256, 00:19:35.160 "data_size": 7936 00:19:35.160 }, 00:19:35.160 { 00:19:35.160 "name": "pt2", 00:19:35.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.160 "is_configured": true, 00:19:35.160 "data_offset": 256, 
00:19:35.160 "data_size": 7936 00:19:35.160 } 00:19:35.160 ] 00:19:35.160 }' 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.160 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.730 [2024-11-27 09:56:36.572476] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:35.730 "name": "raid_bdev1", 00:19:35.730 "aliases": [ 00:19:35.730 "49e370eb-2759-46d5-81b2-4a22a939aab4" 00:19:35.730 ], 00:19:35.730 "product_name": 
"Raid Volume", 00:19:35.730 "block_size": 4096, 00:19:35.730 "num_blocks": 7936, 00:19:35.730 "uuid": "49e370eb-2759-46d5-81b2-4a22a939aab4", 00:19:35.730 "md_size": 32, 00:19:35.730 "md_interleave": false, 00:19:35.730 "dif_type": 0, 00:19:35.730 "assigned_rate_limits": { 00:19:35.730 "rw_ios_per_sec": 0, 00:19:35.730 "rw_mbytes_per_sec": 0, 00:19:35.730 "r_mbytes_per_sec": 0, 00:19:35.730 "w_mbytes_per_sec": 0 00:19:35.730 }, 00:19:35.730 "claimed": false, 00:19:35.730 "zoned": false, 00:19:35.730 "supported_io_types": { 00:19:35.730 "read": true, 00:19:35.730 "write": true, 00:19:35.730 "unmap": false, 00:19:35.730 "flush": false, 00:19:35.730 "reset": true, 00:19:35.730 "nvme_admin": false, 00:19:35.730 "nvme_io": false, 00:19:35.730 "nvme_io_md": false, 00:19:35.730 "write_zeroes": true, 00:19:35.730 "zcopy": false, 00:19:35.730 "get_zone_info": false, 00:19:35.730 "zone_management": false, 00:19:35.730 "zone_append": false, 00:19:35.730 "compare": false, 00:19:35.730 "compare_and_write": false, 00:19:35.730 "abort": false, 00:19:35.730 "seek_hole": false, 00:19:35.730 "seek_data": false, 00:19:35.730 "copy": false, 00:19:35.730 "nvme_iov_md": false 00:19:35.730 }, 00:19:35.730 "memory_domains": [ 00:19:35.730 { 00:19:35.730 "dma_device_id": "system", 00:19:35.730 "dma_device_type": 1 00:19:35.730 }, 00:19:35.730 { 00:19:35.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.730 "dma_device_type": 2 00:19:35.730 }, 00:19:35.730 { 00:19:35.730 "dma_device_id": "system", 00:19:35.730 "dma_device_type": 1 00:19:35.730 }, 00:19:35.730 { 00:19:35.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.730 "dma_device_type": 2 00:19:35.730 } 00:19:35.730 ], 00:19:35.730 "driver_specific": { 00:19:35.730 "raid": { 00:19:35.730 "uuid": "49e370eb-2759-46d5-81b2-4a22a939aab4", 00:19:35.730 "strip_size_kb": 0, 00:19:35.730 "state": "online", 00:19:35.730 "raid_level": "raid1", 00:19:35.730 "superblock": true, 00:19:35.730 "num_base_bdevs": 2, 00:19:35.730 
"num_base_bdevs_discovered": 2, 00:19:35.730 "num_base_bdevs_operational": 2, 00:19:35.730 "base_bdevs_list": [ 00:19:35.730 { 00:19:35.730 "name": "pt1", 00:19:35.730 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:35.730 "is_configured": true, 00:19:35.730 "data_offset": 256, 00:19:35.730 "data_size": 7936 00:19:35.730 }, 00:19:35.730 { 00:19:35.730 "name": "pt2", 00:19:35.730 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:35.730 "is_configured": true, 00:19:35.730 "data_offset": 256, 00:19:35.730 "data_size": 7936 00:19:35.730 } 00:19:35.730 ] 00:19:35.730 } 00:19:35.730 } 00:19:35.730 }' 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:35.730 pt2' 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.730 
09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:35.730 [2024-11-27 09:56:36.820108] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 49e370eb-2759-46d5-81b2-4a22a939aab4 '!=' 49e370eb-2759-46d5-81b2-4a22a939aab4 ']' 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.730 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.001 [2024-11-27 09:56:36.867764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:36.001 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.001 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:36.001 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.001 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.001 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.001 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.001 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:36.001 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.001 09:56:36 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.001 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.001 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.001 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.001 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.001 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.001 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.001 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.001 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.001 "name": "raid_bdev1", 00:19:36.001 "uuid": "49e370eb-2759-46d5-81b2-4a22a939aab4", 00:19:36.001 "strip_size_kb": 0, 00:19:36.001 "state": "online", 00:19:36.001 "raid_level": "raid1", 00:19:36.001 "superblock": true, 00:19:36.001 "num_base_bdevs": 2, 00:19:36.001 "num_base_bdevs_discovered": 1, 00:19:36.001 "num_base_bdevs_operational": 1, 00:19:36.001 "base_bdevs_list": [ 00:19:36.001 { 00:19:36.001 "name": null, 00:19:36.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.001 "is_configured": false, 00:19:36.002 "data_offset": 0, 00:19:36.002 "data_size": 7936 00:19:36.002 }, 00:19:36.002 { 00:19:36.002 "name": "pt2", 00:19:36.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.002 "is_configured": true, 00:19:36.002 "data_offset": 256, 00:19:36.002 "data_size": 7936 00:19:36.002 } 00:19:36.002 ] 00:19:36.002 }' 00:19:36.002 09:56:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:19:36.002 09:56:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.281 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:36.281 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.281 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.281 [2024-11-27 09:56:37.318899] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:36.281 [2024-11-27 09:56:37.318938] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:36.281 [2024-11-27 09:56:37.319060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:36.281 [2024-11-27 09:56:37.319118] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:36.281 [2024-11-27 09:56:37.319132] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:36.281 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.281 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.281 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.281 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:36.281 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:36.282 09:56:37 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.282 [2024-11-27 09:56:37.394775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:36.282 [2024-11-27 09:56:37.394858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.282 
[2024-11-27 09:56:37.394878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:36.282 [2024-11-27 09:56:37.394890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.282 [2024-11-27 09:56:37.397431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.282 [2024-11-27 09:56:37.397536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:36.282 [2024-11-27 09:56:37.397622] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:36.282 [2024-11-27 09:56:37.397683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:36.282 [2024-11-27 09:56:37.397805] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:36.282 [2024-11-27 09:56:37.397818] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:36.282 [2024-11-27 09:56:37.397918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:36.282 [2024-11-27 09:56:37.398064] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:36.282 [2024-11-27 09:56:37.398073] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:36.282 [2024-11-27 09:56:37.398180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.282 pt2 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.282 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.541 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.541 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.541 "name": "raid_bdev1", 00:19:36.541 "uuid": "49e370eb-2759-46d5-81b2-4a22a939aab4", 00:19:36.541 "strip_size_kb": 0, 00:19:36.541 "state": "online", 00:19:36.541 "raid_level": "raid1", 00:19:36.541 "superblock": true, 00:19:36.541 "num_base_bdevs": 2, 00:19:36.541 "num_base_bdevs_discovered": 1, 00:19:36.541 "num_base_bdevs_operational": 1, 00:19:36.541 "base_bdevs_list": [ 00:19:36.541 { 00:19:36.541 
"name": null, 00:19:36.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.541 "is_configured": false, 00:19:36.541 "data_offset": 256, 00:19:36.541 "data_size": 7936 00:19:36.541 }, 00:19:36.541 { 00:19:36.541 "name": "pt2", 00:19:36.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:36.541 "is_configured": true, 00:19:36.541 "data_offset": 256, 00:19:36.541 "data_size": 7936 00:19:36.541 } 00:19:36.541 ] 00:19:36.541 }' 00:19:36.541 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.541 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.800 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:36.800 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.800 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.800 [2024-11-27 09:56:37.885897] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:36.800 [2024-11-27 09:56:37.886048] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:36.800 [2024-11-27 09:56:37.886167] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:36.800 [2024-11-27 09:56:37.886231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:36.800 [2024-11-27 09:56:37.886258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:36.800 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.800 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:36.800 09:56:37 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.800 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.800 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:36.800 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.060 [2024-11-27 09:56:37.949837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:37.060 [2024-11-27 09:56:37.949926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.060 [2024-11-27 09:56:37.949951] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:37.060 [2024-11-27 09:56:37.949960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.060 [2024-11-27 09:56:37.952470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.060 [2024-11-27 09:56:37.952612] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:37.060 [2024-11-27 09:56:37.952711] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:37.060 
[2024-11-27 09:56:37.952773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:37.060 [2024-11-27 09:56:37.952935] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:37.060 [2024-11-27 09:56:37.952947] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:37.060 [2024-11-27 09:56:37.952974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:37.060 [2024-11-27 09:56:37.953083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:37.060 [2024-11-27 09:56:37.953185] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:37.060 [2024-11-27 09:56:37.953196] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:37.060 [2024-11-27 09:56:37.953285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:37.060 [2024-11-27 09:56:37.953403] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:37.060 [2024-11-27 09:56:37.953414] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:37.060 [2024-11-27 09:56:37.953529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.060 pt1 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:37.060 09:56:37 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.060 "name": "raid_bdev1", 00:19:37.060 "uuid": "49e370eb-2759-46d5-81b2-4a22a939aab4", 00:19:37.060 "strip_size_kb": 0, 00:19:37.060 "state": "online", 00:19:37.060 "raid_level": "raid1", 00:19:37.060 "superblock": true, 00:19:37.060 "num_base_bdevs": 2, 00:19:37.060 "num_base_bdevs_discovered": 1, 00:19:37.060 
"num_base_bdevs_operational": 1, 00:19:37.060 "base_bdevs_list": [ 00:19:37.060 { 00:19:37.060 "name": null, 00:19:37.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.060 "is_configured": false, 00:19:37.060 "data_offset": 256, 00:19:37.060 "data_size": 7936 00:19:37.060 }, 00:19:37.060 { 00:19:37.060 "name": "pt2", 00:19:37.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:37.060 "is_configured": true, 00:19:37.060 "data_offset": 256, 00:19:37.060 "data_size": 7936 00:19:37.060 } 00:19:37.060 ] 00:19:37.060 }' 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.060 09:56:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.319 09:56:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:37.319 09:56:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:37.319 09:56:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.319 09:56:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.319 09:56:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.319 09:56:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:37.319 09:56:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:37.319 09:56:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.319 09:56:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:37.319 09:56:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:37.319 [2024-11-27 
09:56:38.433289] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.319 09:56:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.579 09:56:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 49e370eb-2759-46d5-81b2-4a22a939aab4 '!=' 49e370eb-2759-46d5-81b2-4a22a939aab4 ']' 00:19:37.579 09:56:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87781 00:19:37.579 09:56:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87781 ']' 00:19:37.579 09:56:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87781 00:19:37.579 09:56:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:37.579 09:56:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:37.579 09:56:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87781 00:19:37.579 09:56:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:37.579 killing process with pid 87781 00:19:37.579 09:56:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:37.579 09:56:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87781' 00:19:37.579 09:56:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87781 00:19:37.579 [2024-11-27 09:56:38.521884] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:37.579 [2024-11-27 09:56:38.522026] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:37.579 09:56:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87781 
00:19:37.579 [2024-11-27 09:56:38.522091] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:37.579 [2024-11-27 09:56:38.522114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:37.838 [2024-11-27 09:56:38.771055] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:39.217 09:56:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:19:39.217 00:19:39.217 real 0m6.445s 00:19:39.217 user 0m9.552s 00:19:39.217 sys 0m1.322s 00:19:39.217 ************************************ 00:19:39.217 END TEST raid_superblock_test_md_separate 00:19:39.217 ************************************ 00:19:39.217 09:56:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.217 09:56:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.217 09:56:40 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:19:39.217 09:56:40 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:19:39.217 09:56:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:39.217 09:56:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.217 09:56:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:39.217 ************************************ 00:19:39.217 START TEST raid_rebuild_test_sb_md_separate 00:19:39.217 ************************************ 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:39.217 
09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88110 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88110 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88110 ']' 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.217 09:56:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.217 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:19:39.217 Zero copy mechanism will not be used. 00:19:39.217 [2024-11-27 09:56:40.189250] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:19:39.217 [2024-11-27 09:56:40.189384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88110 ] 00:19:39.477 [2024-11-27 09:56:40.365614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.477 [2024-11-27 09:56:40.508667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.736 [2024-11-27 09:56:40.728148] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:39.736 [2024-11-27 09:56:40.728216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:39.996 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.996 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:19:39.996 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:39.996 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:19:39.996 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.996 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.996 BaseBdev1_malloc 00:19:39.996 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.996 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:39.996 09:56:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.996 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:39.996 [2024-11-27 09:56:41.103128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:39.996 [2024-11-27 09:56:41.103263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:39.996 [2024-11-27 09:56:41.103311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:39.996 [2024-11-27 09:56:41.103345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:39.996 [2024-11-27 09:56:41.105727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:39.996 [2024-11-27 09:56:41.105818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:39.996 BaseBdev1 00:19:39.996 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.996 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:39.996 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:19:39.996 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.996 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.255 BaseBdev2_malloc 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.255 [2024-11-27 09:56:41.166980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:40.255 [2024-11-27 09:56:41.167079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.255 [2024-11-27 09:56:41.167105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:40.255 [2024-11-27 09:56:41.167119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.255 [2024-11-27 09:56:41.169417] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.255 [2024-11-27 09:56:41.169523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:40.255 BaseBdev2 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.255 spare_malloc 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.255 spare_delay 00:19:40.255 09:56:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.255 [2024-11-27 09:56:41.251974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:40.255 [2024-11-27 09:56:41.252074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.255 [2024-11-27 09:56:41.252105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:40.255 [2024-11-27 09:56:41.252118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.255 [2024-11-27 09:56:41.254528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.255 [2024-11-27 09:56:41.254574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:40.255 spare 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.255 [2024-11-27 09:56:41.264064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:40.255 [2024-11-27 09:56:41.266326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:19:40.255 [2024-11-27 09:56:41.266575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:40.255 [2024-11-27 09:56:41.266593] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:40.255 [2024-11-27 09:56:41.266727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:40.255 [2024-11-27 09:56:41.266870] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:40.255 [2024-11-27 09:56:41.266880] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:40.255 [2024-11-27 09:56:41.267032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.255 "name": "raid_bdev1", 00:19:40.255 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:40.255 "strip_size_kb": 0, 00:19:40.255 "state": "online", 00:19:40.255 "raid_level": "raid1", 00:19:40.255 "superblock": true, 00:19:40.255 "num_base_bdevs": 2, 00:19:40.255 "num_base_bdevs_discovered": 2, 00:19:40.255 "num_base_bdevs_operational": 2, 00:19:40.255 "base_bdevs_list": [ 00:19:40.255 { 00:19:40.255 "name": "BaseBdev1", 00:19:40.255 "uuid": "ba031292-9c18-512b-b01a-3fbed626db61", 00:19:40.255 "is_configured": true, 00:19:40.255 "data_offset": 256, 00:19:40.255 "data_size": 7936 00:19:40.255 }, 00:19:40.255 { 00:19:40.255 "name": "BaseBdev2", 00:19:40.255 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:40.255 "is_configured": true, 00:19:40.255 "data_offset": 256, 00:19:40.255 "data_size": 7936 00:19:40.255 } 00:19:40.255 ] 00:19:40.255 }' 00:19:40.255 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.256 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.823 09:56:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:40.823 [2024-11-27 09:56:41.715559] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:40.823 09:56:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:41.082 [2024-11-27 09:56:42.002846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:41.082 /dev/nbd0 00:19:41.082 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:41.082 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:41.082 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:41.082 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:41.082 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:41.082 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:41.082 
09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:41.082 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:41.082 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:41.082 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:41.082 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:41.082 1+0 records in 00:19:41.082 1+0 records out 00:19:41.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451612 s, 9.1 MB/s 00:19:41.082 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.082 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:41.082 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:41.082 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:41.082 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:41.082 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:41.082 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:41.082 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:41.082 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:41.082 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:41.650 7936+0 records in 00:19:41.650 7936+0 records out 00:19:41.650 32505856 bytes (33 MB, 31 MiB) copied, 0.59599 s, 54.5 MB/s 00:19:41.650 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:41.650 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:41.650 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:41.650 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:41.650 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:41.650 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:41.650 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:41.910 [2024-11-27 09:56:42.879857] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # 
break 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.910 [2024-11-27 09:56:42.922185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.910 "name": "raid_bdev1", 00:19:41.910 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:41.910 "strip_size_kb": 0, 00:19:41.910 "state": "online", 00:19:41.910 "raid_level": "raid1", 00:19:41.910 "superblock": true, 00:19:41.910 "num_base_bdevs": 2, 00:19:41.910 "num_base_bdevs_discovered": 1, 00:19:41.910 "num_base_bdevs_operational": 1, 00:19:41.910 "base_bdevs_list": [ 00:19:41.910 { 00:19:41.910 "name": null, 00:19:41.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.910 "is_configured": false, 00:19:41.910 "data_offset": 0, 00:19:41.910 "data_size": 7936 00:19:41.910 }, 00:19:41.910 { 00:19:41.910 "name": "BaseBdev2", 00:19:41.910 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:41.910 "is_configured": true, 00:19:41.910 "data_offset": 256, 00:19:41.910 "data_size": 7936 00:19:41.910 } 00:19:41.910 ] 00:19:41.910 }' 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.910 09:56:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.479 09:56:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:42.479 09:56:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:42.479 09:56:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:42.479 [2024-11-27 09:56:43.341444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:42.479 [2024-11-27 09:56:43.357077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:42.479 09:56:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.479 09:56:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:42.479 [2024-11-27 09:56:43.359464] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:43.417 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:43.417 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:43.417 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:43.417 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:43.417 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:43.417 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.417 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.417 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.417 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.417 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.417 09:56:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:43.417 "name": "raid_bdev1", 00:19:43.417 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:43.417 "strip_size_kb": 0, 00:19:43.417 "state": "online", 00:19:43.417 "raid_level": "raid1", 00:19:43.417 "superblock": true, 00:19:43.417 "num_base_bdevs": 2, 00:19:43.417 "num_base_bdevs_discovered": 2, 00:19:43.417 "num_base_bdevs_operational": 2, 00:19:43.417 "process": { 00:19:43.417 "type": "rebuild", 00:19:43.417 "target": "spare", 00:19:43.417 "progress": { 00:19:43.417 "blocks": 2560, 00:19:43.417 "percent": 32 00:19:43.417 } 00:19:43.417 }, 00:19:43.417 "base_bdevs_list": [ 00:19:43.417 { 00:19:43.417 "name": "spare", 00:19:43.417 "uuid": "3fcda029-3e45-574c-9323-0e958007e21f", 00:19:43.417 "is_configured": true, 00:19:43.417 "data_offset": 256, 00:19:43.417 "data_size": 7936 00:19:43.417 }, 00:19:43.417 { 00:19:43.417 "name": "BaseBdev2", 00:19:43.417 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:43.417 "is_configured": true, 00:19:43.417 "data_offset": 256, 00:19:43.417 "data_size": 7936 00:19:43.417 } 00:19:43.417 ] 00:19:43.417 }' 00:19:43.417 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:43.417 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:43.417 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:43.417 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:43.417 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:43.417 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.417 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:19:43.417 [2024-11-27 09:56:44.507515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:43.677 [2024-11-27 09:56:44.570555] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:43.677 [2024-11-27 09:56:44.570673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.677 [2024-11-27 09:56:44.570690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:43.677 [2024-11-27 09:56:44.570707] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:43.677 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.677 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:43.677 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.677 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.677 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.677 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.677 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:43.677 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.677 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.677 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.677 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:19:43.677 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.677 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.677 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.677 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:43.677 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.677 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.677 "name": "raid_bdev1", 00:19:43.677 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:43.677 "strip_size_kb": 0, 00:19:43.677 "state": "online", 00:19:43.677 "raid_level": "raid1", 00:19:43.677 "superblock": true, 00:19:43.677 "num_base_bdevs": 2, 00:19:43.677 "num_base_bdevs_discovered": 1, 00:19:43.677 "num_base_bdevs_operational": 1, 00:19:43.677 "base_bdevs_list": [ 00:19:43.677 { 00:19:43.677 "name": null, 00:19:43.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.677 "is_configured": false, 00:19:43.677 "data_offset": 0, 00:19:43.677 "data_size": 7936 00:19:43.677 }, 00:19:43.677 { 00:19:43.677 "name": "BaseBdev2", 00:19:43.677 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:43.677 "is_configured": true, 00:19:43.677 "data_offset": 256, 00:19:43.677 "data_size": 7936 00:19:43.677 } 00:19:43.677 ] 00:19:43.677 }' 00:19:43.677 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.677 09:56:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:44.246 09:56:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:44.246 09:56:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:44.246 09:56:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:44.246 09:56:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:44.246 09:56:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:44.246 09:56:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.246 09:56:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.246 09:56:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.246 09:56:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:44.246 09:56:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.246 09:56:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:44.246 "name": "raid_bdev1", 00:19:44.246 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:44.246 "strip_size_kb": 0, 00:19:44.246 "state": "online", 00:19:44.246 "raid_level": "raid1", 00:19:44.246 "superblock": true, 00:19:44.246 "num_base_bdevs": 2, 00:19:44.246 "num_base_bdevs_discovered": 1, 00:19:44.246 "num_base_bdevs_operational": 1, 00:19:44.246 "base_bdevs_list": [ 00:19:44.246 { 00:19:44.246 "name": null, 00:19:44.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.246 "is_configured": false, 00:19:44.246 "data_offset": 0, 00:19:44.246 "data_size": 7936 00:19:44.246 }, 00:19:44.246 { 00:19:44.246 "name": "BaseBdev2", 00:19:44.246 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:44.246 "is_configured": true, 00:19:44.246 "data_offset": 256, 00:19:44.246 "data_size": 7936 
00:19:44.246 } 00:19:44.246 ] 00:19:44.246 }' 00:19:44.246 09:56:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:44.246 09:56:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:44.246 09:56:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:44.246 09:56:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:44.246 09:56:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:44.246 09:56:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.246 09:56:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:44.246 [2024-11-27 09:56:45.224802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:44.246 [2024-11-27 09:56:45.239847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:44.246 09:56:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.246 09:56:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:44.246 [2024-11-27 09:56:45.242278] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:45.184 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:45.184 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:45.184 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:45.184 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:19:45.184 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:45.184 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.184 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.184 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.184 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.184 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.184 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:45.184 "name": "raid_bdev1", 00:19:45.184 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:45.184 "strip_size_kb": 0, 00:19:45.184 "state": "online", 00:19:45.184 "raid_level": "raid1", 00:19:45.184 "superblock": true, 00:19:45.184 "num_base_bdevs": 2, 00:19:45.184 "num_base_bdevs_discovered": 2, 00:19:45.184 "num_base_bdevs_operational": 2, 00:19:45.184 "process": { 00:19:45.184 "type": "rebuild", 00:19:45.184 "target": "spare", 00:19:45.184 "progress": { 00:19:45.184 "blocks": 2560, 00:19:45.184 "percent": 32 00:19:45.184 } 00:19:45.184 }, 00:19:45.184 "base_bdevs_list": [ 00:19:45.184 { 00:19:45.184 "name": "spare", 00:19:45.184 "uuid": "3fcda029-3e45-574c-9323-0e958007e21f", 00:19:45.184 "is_configured": true, 00:19:45.184 "data_offset": 256, 00:19:45.184 "data_size": 7936 00:19:45.184 }, 00:19:45.184 { 00:19:45.184 "name": "BaseBdev2", 00:19:45.184 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:45.184 "is_configured": true, 00:19:45.184 "data_offset": 256, 00:19:45.184 "data_size": 7936 00:19:45.184 } 00:19:45.184 ] 00:19:45.184 }' 00:19:45.184 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:45.444 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=721 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:45.444 
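The `line 666: [: =: unary operator expected` message in the trace above is the classic symptom of an unquoted, empty operand inside a `[` test: `'[' $var = false ']'` with `$var` empty collapses to `[ = false ]`, leaving `[` with too few operands. A minimal reproduction and the two usual fixes (variable names here are illustrative, not taken from the SPDK script):

```shell
flag=""

broken_status=0
[ $flag = false ] 2>/dev/null || broken_status=$?   # expands to: [ = false ]
# broken_status is non-zero: '[' cannot parse "= false" and reports a syntax error

quoted_status=0
[ "$flag" = false ] || quoted_status=$?             # quoted: a valid 3-operand test
# quoted_status is 1: "" simply compares unequal to "false", no syntax error

keyword_status=0
[[ $flag = false ]] || keyword_status=$?            # [[ ]] never word-splits operands
# keyword_status is 1: same comparison, immune to empty expansions

echo "broken=$broken_status quoted=$quoted_status keyword=$keyword_status"
```

Quoting the expansion (or switching to the `[[ ]]` keyword, which does not word-split) makes the comparison well-formed even when the variable is empty or unset.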
09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:45.444 "name": "raid_bdev1", 00:19:45.444 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:45.444 "strip_size_kb": 0, 00:19:45.444 "state": "online", 00:19:45.444 "raid_level": "raid1", 00:19:45.444 "superblock": true, 00:19:45.444 "num_base_bdevs": 2, 00:19:45.444 "num_base_bdevs_discovered": 2, 00:19:45.444 "num_base_bdevs_operational": 2, 00:19:45.444 "process": { 00:19:45.444 "type": "rebuild", 00:19:45.444 "target": "spare", 00:19:45.444 "progress": { 00:19:45.444 "blocks": 2816, 00:19:45.444 "percent": 35 00:19:45.444 } 00:19:45.444 }, 00:19:45.444 "base_bdevs_list": [ 00:19:45.444 { 00:19:45.444 "name": "spare", 00:19:45.444 "uuid": "3fcda029-3e45-574c-9323-0e958007e21f", 00:19:45.444 "is_configured": true, 00:19:45.444 "data_offset": 256, 00:19:45.444 "data_size": 7936 00:19:45.444 }, 00:19:45.444 { 00:19:45.444 "name": "BaseBdev2", 00:19:45.444 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:45.444 "is_configured": true, 00:19:45.444 "data_offset": 256, 00:19:45.444 "data_size": 7936 00:19:45.444 } 00:19:45.444 ] 00:19:45.444 }' 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:45.444 09:56:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:46.825 09:56:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:46.825 09:56:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:46.825 09:56:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:46.825 09:56:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:46.825 09:56:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:46.825 09:56:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:46.825 09:56:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:46.825 09:56:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.825 09:56:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:46.825 09:56:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.825 09:56:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.825 09:56:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:46.825 "name": "raid_bdev1", 00:19:46.825 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:46.825 "strip_size_kb": 0, 00:19:46.825 
"state": "online", 00:19:46.825 "raid_level": "raid1", 00:19:46.825 "superblock": true, 00:19:46.825 "num_base_bdevs": 2, 00:19:46.825 "num_base_bdevs_discovered": 2, 00:19:46.825 "num_base_bdevs_operational": 2, 00:19:46.825 "process": { 00:19:46.825 "type": "rebuild", 00:19:46.825 "target": "spare", 00:19:46.825 "progress": { 00:19:46.825 "blocks": 5888, 00:19:46.825 "percent": 74 00:19:46.825 } 00:19:46.825 }, 00:19:46.825 "base_bdevs_list": [ 00:19:46.825 { 00:19:46.825 "name": "spare", 00:19:46.825 "uuid": "3fcda029-3e45-574c-9323-0e958007e21f", 00:19:46.825 "is_configured": true, 00:19:46.825 "data_offset": 256, 00:19:46.825 "data_size": 7936 00:19:46.825 }, 00:19:46.825 { 00:19:46.825 "name": "BaseBdev2", 00:19:46.825 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:46.825 "is_configured": true, 00:19:46.825 "data_offset": 256, 00:19:46.825 "data_size": 7936 00:19:46.825 } 00:19:46.825 ] 00:19:46.825 }' 00:19:46.825 09:56:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:46.825 09:56:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:46.825 09:56:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:46.825 09:56:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:46.825 09:56:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:47.394 [2024-11-27 09:56:48.370254] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:47.394 [2024-11-27 09:56:48.370491] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:47.394 [2024-11-27 09:56:48.370708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.654 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
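Throughout this trace, string checks appear in xtrace as `[[ rebuild == \r\e\b\u\i\l\d ]]` or `[[ spare == \s\p\a\r\e ]]`. That is not garbling: the right-hand side of `==` inside `[[ ]]` is a glob *pattern*, and bash's xtrace escapes each character to show the operand is being matched literally rather than expanded as a glob. A small sketch of the distinction (values are illustrative):

```shell
process_type="rebuild"

# Unquoted RHS is a glob pattern: 're*' matches "rebuild".
glob_match=no
[[ $process_type == re* ]] && glob_match=yes || true

# Quoted RHS is taken literally: the pattern 're*' no longer matches.
literal_star=no
[[ $process_type == "re*" ]] && literal_star=yes || true

# Backslash-escaping every character (as xtrace prints it) is also literal,
# so the exact word still matches.
literal_word=no
[[ $process_type == \r\e\b\u\i\l\d ]] && literal_word=yes || true

echo "glob=$glob_match literal_star=$literal_star literal_word=$literal_word"
```

This is why the test harness's equality checks are robust: escaping the expected value prevents any glob metacharacter in it from being interpreted.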
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:47.654 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.654 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.654 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:47.654 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:47.654 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.654 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.654 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.654 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.654 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.654 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.654 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.654 "name": "raid_bdev1", 00:19:47.654 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:47.654 "strip_size_kb": 0, 00:19:47.654 "state": "online", 00:19:47.654 "raid_level": "raid1", 00:19:47.654 "superblock": true, 00:19:47.654 "num_base_bdevs": 2, 00:19:47.654 "num_base_bdevs_discovered": 2, 00:19:47.654 "num_base_bdevs_operational": 2, 00:19:47.654 "base_bdevs_list": [ 00:19:47.654 { 00:19:47.654 "name": "spare", 00:19:47.654 "uuid": "3fcda029-3e45-574c-9323-0e958007e21f", 00:19:47.654 "is_configured": true, 00:19:47.654 "data_offset": 256, 00:19:47.654 "data_size": 7936 
00:19:47.654 }, 00:19:47.654 { 00:19:47.654 "name": "BaseBdev2", 00:19:47.654 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:47.654 "is_configured": true, 00:19:47.654 "data_offset": 256, 00:19:47.654 "data_size": 7936 00:19:47.654 } 00:19:47.654 ] 00:19:47.654 }' 00:19:47.654 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.914 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:47.914 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.914 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:47.914 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:19:47.914 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:47.914 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.914 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:47.914 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:47.914 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.914 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.914 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.914 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.914 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.914 
09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.914 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.914 "name": "raid_bdev1", 00:19:47.914 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:47.914 "strip_size_kb": 0, 00:19:47.914 "state": "online", 00:19:47.914 "raid_level": "raid1", 00:19:47.914 "superblock": true, 00:19:47.914 "num_base_bdevs": 2, 00:19:47.914 "num_base_bdevs_discovered": 2, 00:19:47.914 "num_base_bdevs_operational": 2, 00:19:47.914 "base_bdevs_list": [ 00:19:47.914 { 00:19:47.914 "name": "spare", 00:19:47.914 "uuid": "3fcda029-3e45-574c-9323-0e958007e21f", 00:19:47.914 "is_configured": true, 00:19:47.914 "data_offset": 256, 00:19:47.914 "data_size": 7936 00:19:47.914 }, 00:19:47.914 { 00:19:47.914 "name": "BaseBdev2", 00:19:47.914 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:47.914 "is_configured": true, 00:19:47.914 "data_offset": 256, 00:19:47.914 "data_size": 7936 00:19:47.914 } 00:19:47.914 ] 00:19:47.914 }' 00:19:47.914 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.914 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:47.914 09:56:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.914 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:47.914 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:47.914 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.914 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.914 09:56:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.914 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.914 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:47.914 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.914 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.914 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.914 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.914 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.915 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.915 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:47.915 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.915 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.175 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:48.175 "name": "raid_bdev1", 00:19:48.175 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:48.175 "strip_size_kb": 0, 00:19:48.175 "state": "online", 00:19:48.175 "raid_level": "raid1", 00:19:48.175 "superblock": true, 00:19:48.175 "num_base_bdevs": 2, 00:19:48.175 "num_base_bdevs_discovered": 2, 00:19:48.175 "num_base_bdevs_operational": 2, 00:19:48.175 "base_bdevs_list": [ 00:19:48.175 { 00:19:48.175 "name": "spare", 00:19:48.175 "uuid": 
"3fcda029-3e45-574c-9323-0e958007e21f", 00:19:48.175 "is_configured": true, 00:19:48.175 "data_offset": 256, 00:19:48.175 "data_size": 7936 00:19:48.175 }, 00:19:48.175 { 00:19:48.175 "name": "BaseBdev2", 00:19:48.175 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:48.175 "is_configured": true, 00:19:48.175 "data_offset": 256, 00:19:48.175 "data_size": 7936 00:19:48.175 } 00:19:48.175 ] 00:19:48.175 }' 00:19:48.175 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:48.175 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.435 [2024-11-27 09:56:49.464059] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:48.435 [2024-11-27 09:56:49.464180] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:48.435 [2024-11-27 09:56:49.464310] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:48.435 [2024-11-27 09:56:49.464398] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:48.435 [2024-11-27 09:56:49.464410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:48.435 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 
/dev/nbd0 00:19:48.708 /dev/nbd0 00:19:48.708 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:48.708 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:48.708 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:48.708 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:48.708 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:48.708 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:48.708 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:48.708 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:48.708 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:48.708 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:48.708 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:48.708 1+0 records in 00:19:48.708 1+0 records out 00:19:48.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617082 s, 6.6 MB/s 00:19:48.708 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.708 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:48.708 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.708 09:56:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:48.708 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:48.708 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:48.708 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:48.708 09:56:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:48.992 /dev/nbd1 00:19:48.992 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:48.992 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:48.992 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:48.992 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:19:48.992 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:48.992 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:48.992 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:48.992 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:19:48.992 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:48.992 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:48.992 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:19:48.992 1+0 records in 00:19:48.992 1+0 records out 00:19:48.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494466 s, 8.3 MB/s 00:19:48.992 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.992 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:19:48.992 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.992 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:48.992 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:19:48.992 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:48.992 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:48.993 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:49.251 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:49.251 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:49.251 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:49.251 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:49.251 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:49.251 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:49.251 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:49.511 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:49.511 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:49.511 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:49.511 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:49.511 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:49.511 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:49.511 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:49.511 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:49.511 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:49.511 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:49.771 
09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.771 [2024-11-27 09:56:50.743656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:49.771 [2024-11-27 09:56:50.743763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:49.771 [2024-11-27 09:56:50.743797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:49.771 [2024-11-27 09:56:50.743808] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:49.771 [2024-11-27 09:56:50.746570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:49.771 [2024-11-27 09:56:50.746668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:49.771 [2024-11-27 09:56:50.746803] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:19:49.771 [2024-11-27 09:56:50.746900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:49.771 [2024-11-27 09:56:50.747198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:49.771 spare 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.771 [2024-11-27 09:56:50.847190] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:49.771 [2024-11-27 09:56:50.847387] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:49.771 [2024-11-27 09:56:50.847598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:49.771 [2024-11-27 09:56:50.847884] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:49.771 [2024-11-27 09:56:50.847933] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:49.771 [2024-11-27 09:56:50.848200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:49.771 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.031 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.031 "name": "raid_bdev1", 00:19:50.031 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:50.031 "strip_size_kb": 0, 00:19:50.031 "state": "online", 00:19:50.031 "raid_level": "raid1", 00:19:50.031 "superblock": true, 00:19:50.031 "num_base_bdevs": 2, 00:19:50.031 "num_base_bdevs_discovered": 2, 00:19:50.031 "num_base_bdevs_operational": 2, 00:19:50.031 "base_bdevs_list": [ 
00:19:50.031 { 00:19:50.031 "name": "spare", 00:19:50.031 "uuid": "3fcda029-3e45-574c-9323-0e958007e21f", 00:19:50.031 "is_configured": true, 00:19:50.031 "data_offset": 256, 00:19:50.031 "data_size": 7936 00:19:50.031 }, 00:19:50.031 { 00:19:50.031 "name": "BaseBdev2", 00:19:50.031 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:50.031 "is_configured": true, 00:19:50.031 "data_offset": 256, 00:19:50.031 "data_size": 7936 00:19:50.031 } 00:19:50.031 ] 00:19:50.031 }' 00:19:50.031 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.031 09:56:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.291 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:50.291 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.291 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:50.291 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:50.291 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.291 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.291 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.291 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.291 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.291 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.291 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.291 "name": "raid_bdev1", 00:19:50.291 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:50.291 "strip_size_kb": 0, 00:19:50.291 "state": "online", 00:19:50.291 "raid_level": "raid1", 00:19:50.291 "superblock": true, 00:19:50.291 "num_base_bdevs": 2, 00:19:50.291 "num_base_bdevs_discovered": 2, 00:19:50.291 "num_base_bdevs_operational": 2, 00:19:50.291 "base_bdevs_list": [ 00:19:50.291 { 00:19:50.291 "name": "spare", 00:19:50.291 "uuid": "3fcda029-3e45-574c-9323-0e958007e21f", 00:19:50.291 "is_configured": true, 00:19:50.291 "data_offset": 256, 00:19:50.291 "data_size": 7936 00:19:50.292 }, 00:19:50.292 { 00:19:50.292 "name": "BaseBdev2", 00:19:50.292 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:50.292 "is_configured": true, 00:19:50.292 "data_offset": 256, 00:19:50.292 "data_size": 7936 00:19:50.292 } 00:19:50.292 ] 00:19:50.292 }' 00:19:50.292 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:50.292 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:50.292 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.292 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:50.292 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.292 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.292 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.551 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:50.551 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:19:50.551 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.551 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:50.551 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.551 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.552 [2024-11-27 09:56:51.479204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:50.552 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.552 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:50.552 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:50.552 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:50.552 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:50.552 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:50.552 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:50.552 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:50.552 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:50.552 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:50.552 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:50.552 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.552 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.552 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.552 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.552 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.552 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:50.552 "name": "raid_bdev1", 00:19:50.552 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:50.552 "strip_size_kb": 0, 00:19:50.552 "state": "online", 00:19:50.552 "raid_level": "raid1", 00:19:50.552 "superblock": true, 00:19:50.552 "num_base_bdevs": 2, 00:19:50.552 "num_base_bdevs_discovered": 1, 00:19:50.552 "num_base_bdevs_operational": 1, 00:19:50.552 "base_bdevs_list": [ 00:19:50.552 { 00:19:50.552 "name": null, 00:19:50.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.552 "is_configured": false, 00:19:50.552 "data_offset": 0, 00:19:50.552 "data_size": 7936 00:19:50.552 }, 00:19:50.552 { 00:19:50.552 "name": "BaseBdev2", 00:19:50.552 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:50.552 "is_configured": true, 00:19:50.552 "data_offset": 256, 00:19:50.552 "data_size": 7936 00:19:50.552 } 00:19:50.552 ] 00:19:50.552 }' 00:19:50.552 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:50.552 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.811 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:50.811 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:50.811 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:50.811 [2024-11-27 09:56:51.914477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:50.811 [2024-11-27 09:56:51.914845] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:50.811 [2024-11-27 09:56:51.914926] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:50.811 [2024-11-27 09:56:51.915016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:50.811 [2024-11-27 09:56:51.929815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:50.811 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.811 09:56:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:50.811 [2024-11-27 09:56:51.932400] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:52.193 09:56:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.193 09:56:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.193 09:56:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:52.193 09:56:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:52.193 09:56:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.193 09:56:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.193 09:56:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.193 09:56:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.193 09:56:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.193 09:56:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.193 09:56:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.193 "name": "raid_bdev1", 00:19:52.193 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:52.193 "strip_size_kb": 0, 00:19:52.193 "state": "online", 00:19:52.193 "raid_level": "raid1", 00:19:52.193 "superblock": true, 00:19:52.193 "num_base_bdevs": 2, 00:19:52.193 "num_base_bdevs_discovered": 2, 00:19:52.193 "num_base_bdevs_operational": 2, 00:19:52.193 "process": { 00:19:52.193 "type": "rebuild", 00:19:52.193 "target": "spare", 00:19:52.193 "progress": { 00:19:52.193 "blocks": 2560, 00:19:52.193 "percent": 32 00:19:52.193 } 00:19:52.193 }, 00:19:52.193 "base_bdevs_list": [ 00:19:52.193 { 00:19:52.193 "name": "spare", 00:19:52.193 "uuid": "3fcda029-3e45-574c-9323-0e958007e21f", 00:19:52.193 "is_configured": true, 00:19:52.193 "data_offset": 256, 00:19:52.193 "data_size": 7936 00:19:52.193 }, 00:19:52.193 { 00:19:52.193 "name": "BaseBdev2", 00:19:52.193 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:52.193 "is_configured": true, 00:19:52.193 "data_offset": 256, 00:19:52.193 "data_size": 7936 00:19:52.193 } 00:19:52.193 ] 00:19:52.193 }' 00:19:52.193 09:56:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.193 09:56:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.193 [2024-11-27 09:56:53.088392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:52.193 [2024-11-27 09:56:53.143208] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:52.193 [2024-11-27 09:56:53.143302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.193 [2024-11-27 09:56:53.143320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:52.193 [2024-11-27 09:56:53.143353] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:52.193 09:56:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.193 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:52.193 "name": "raid_bdev1", 00:19:52.193 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:52.193 "strip_size_kb": 0, 00:19:52.193 "state": "online", 00:19:52.193 "raid_level": "raid1", 00:19:52.193 "superblock": true, 00:19:52.193 "num_base_bdevs": 2, 00:19:52.193 "num_base_bdevs_discovered": 1, 00:19:52.193 "num_base_bdevs_operational": 1, 00:19:52.193 "base_bdevs_list": [ 00:19:52.193 { 00:19:52.193 "name": null, 00:19:52.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.193 "is_configured": false, 00:19:52.194 "data_offset": 0, 00:19:52.194 "data_size": 7936 00:19:52.194 }, 00:19:52.194 { 00:19:52.194 "name": "BaseBdev2", 00:19:52.194 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:52.194 "is_configured": true, 00:19:52.194 "data_offset": 256, 00:19:52.194 "data_size": 7936 00:19:52.194 } 
00:19:52.194 ] 00:19:52.194 }' 00:19:52.194 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:52.194 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.763 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:52.763 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.763 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:52.763 [2024-11-27 09:56:53.634160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:52.763 [2024-11-27 09:56:53.634334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:52.763 [2024-11-27 09:56:53.634383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:52.763 [2024-11-27 09:56:53.634419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:52.763 [2024-11-27 09:56:53.634776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:52.763 [2024-11-27 09:56:53.634838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:52.763 [2024-11-27 09:56:53.634943] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:52.763 [2024-11-27 09:56:53.634985] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:52.763 [2024-11-27 09:56:53.635042] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:52.763 [2024-11-27 09:56:53.635114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:52.763 [2024-11-27 09:56:53.651231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:52.763 spare 00:19:52.763 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.763 09:56:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:52.763 [2024-11-27 09:56:53.653819] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:53.701 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.701 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.701 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:53.701 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:53.701 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.701 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.701 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.701 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.701 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.701 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.701 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.701 "name": 
"raid_bdev1", 00:19:53.701 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:53.701 "strip_size_kb": 0, 00:19:53.701 "state": "online", 00:19:53.701 "raid_level": "raid1", 00:19:53.701 "superblock": true, 00:19:53.701 "num_base_bdevs": 2, 00:19:53.701 "num_base_bdevs_discovered": 2, 00:19:53.701 "num_base_bdevs_operational": 2, 00:19:53.701 "process": { 00:19:53.701 "type": "rebuild", 00:19:53.701 "target": "spare", 00:19:53.701 "progress": { 00:19:53.701 "blocks": 2560, 00:19:53.702 "percent": 32 00:19:53.702 } 00:19:53.702 }, 00:19:53.702 "base_bdevs_list": [ 00:19:53.702 { 00:19:53.702 "name": "spare", 00:19:53.702 "uuid": "3fcda029-3e45-574c-9323-0e958007e21f", 00:19:53.702 "is_configured": true, 00:19:53.702 "data_offset": 256, 00:19:53.702 "data_size": 7936 00:19:53.702 }, 00:19:53.702 { 00:19:53.702 "name": "BaseBdev2", 00:19:53.702 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:53.702 "is_configured": true, 00:19:53.702 "data_offset": 256, 00:19:53.702 "data_size": 7936 00:19:53.702 } 00:19:53.702 ] 00:19:53.702 }' 00:19:53.702 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.702 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:53.702 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.702 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.702 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:53.702 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.702 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.702 [2024-11-27 09:56:54.801966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:53.961 [2024-11-27 09:56:54.864926] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:53.961 [2024-11-27 09:56:54.865048] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:53.961 [2024-11-27 09:56:54.865073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:53.962 [2024-11-27 09:56:54.865082] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:53.962 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.962 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:53.962 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:53.962 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:53.962 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:53.962 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:53.962 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:53.962 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:53.962 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:53.962 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:53.962 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:53.962 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:53.962 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.962 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.962 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:53.962 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.962 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:53.962 "name": "raid_bdev1", 00:19:53.962 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:53.962 "strip_size_kb": 0, 00:19:53.962 "state": "online", 00:19:53.962 "raid_level": "raid1", 00:19:53.962 "superblock": true, 00:19:53.962 "num_base_bdevs": 2, 00:19:53.962 "num_base_bdevs_discovered": 1, 00:19:53.962 "num_base_bdevs_operational": 1, 00:19:53.962 "base_bdevs_list": [ 00:19:53.962 { 00:19:53.962 "name": null, 00:19:53.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.962 "is_configured": false, 00:19:53.962 "data_offset": 0, 00:19:53.962 "data_size": 7936 00:19:53.962 }, 00:19:53.962 { 00:19:53.962 "name": "BaseBdev2", 00:19:53.962 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:53.962 "is_configured": true, 00:19:53.962 "data_offset": 256, 00:19:53.962 "data_size": 7936 00:19:53.962 } 00:19:53.962 ] 00:19:53.962 }' 00:19:53.962 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:53.962 09:56:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.222 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:54.222 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.222 09:56:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:54.222 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:54.222 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.222 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.222 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.222 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.222 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.222 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.222 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.222 "name": "raid_bdev1", 00:19:54.222 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:54.222 "strip_size_kb": 0, 00:19:54.222 "state": "online", 00:19:54.222 "raid_level": "raid1", 00:19:54.222 "superblock": true, 00:19:54.222 "num_base_bdevs": 2, 00:19:54.222 "num_base_bdevs_discovered": 1, 00:19:54.222 "num_base_bdevs_operational": 1, 00:19:54.222 "base_bdevs_list": [ 00:19:54.222 { 00:19:54.222 "name": null, 00:19:54.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.222 "is_configured": false, 00:19:54.223 "data_offset": 0, 00:19:54.223 "data_size": 7936 00:19:54.223 }, 00:19:54.223 { 00:19:54.223 "name": "BaseBdev2", 00:19:54.223 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:54.223 "is_configured": true, 00:19:54.223 "data_offset": 256, 00:19:54.223 "data_size": 7936 00:19:54.223 } 00:19:54.223 ] 00:19:54.223 }' 00:19:54.223 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.494 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:54.494 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.494 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:54.494 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:54.494 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.494 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.494 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.494 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:54.494 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.494 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:54.494 [2024-11-27 09:56:55.455295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:54.494 [2024-11-27 09:56:55.455389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.494 [2024-11-27 09:56:55.455420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:54.494 [2024-11-27 09:56:55.455430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.494 [2024-11-27 09:56:55.455727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.494 [2024-11-27 09:56:55.455738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:19:54.494 [2024-11-27 09:56:55.455807] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:54.494 [2024-11-27 09:56:55.455822] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:54.494 [2024-11-27 09:56:55.455837] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:54.494 [2024-11-27 09:56:55.455850] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:54.494 BaseBdev1 00:19:54.494 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.494 09:56:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:55.431 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:55.431 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:55.431 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:55.431 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:55.431 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:55.431 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:55.431 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.431 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.432 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:55.432 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.432 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.432 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.432 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.432 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:55.432 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.432 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.432 "name": "raid_bdev1", 00:19:55.432 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:55.432 "strip_size_kb": 0, 00:19:55.432 "state": "online", 00:19:55.432 "raid_level": "raid1", 00:19:55.432 "superblock": true, 00:19:55.432 "num_base_bdevs": 2, 00:19:55.432 "num_base_bdevs_discovered": 1, 00:19:55.432 "num_base_bdevs_operational": 1, 00:19:55.432 "base_bdevs_list": [ 00:19:55.432 { 00:19:55.432 "name": null, 00:19:55.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.432 "is_configured": false, 00:19:55.432 "data_offset": 0, 00:19:55.432 "data_size": 7936 00:19:55.432 }, 00:19:55.432 { 00:19:55.432 "name": "BaseBdev2", 00:19:55.432 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:55.432 "is_configured": true, 00:19:55.432 "data_offset": 256, 00:19:55.432 "data_size": 7936 00:19:55.432 } 00:19:55.432 ] 00:19:55.432 }' 00:19:55.432 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.432 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.001 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:19:56.001 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:56.001 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:56.001 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:56.001 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:56.001 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.001 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.001 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.001 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.001 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.001 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:56.001 "name": "raid_bdev1", 00:19:56.001 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:56.001 "strip_size_kb": 0, 00:19:56.001 "state": "online", 00:19:56.001 "raid_level": "raid1", 00:19:56.001 "superblock": true, 00:19:56.001 "num_base_bdevs": 2, 00:19:56.001 "num_base_bdevs_discovered": 1, 00:19:56.001 "num_base_bdevs_operational": 1, 00:19:56.001 "base_bdevs_list": [ 00:19:56.001 { 00:19:56.001 "name": null, 00:19:56.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.001 "is_configured": false, 00:19:56.001 "data_offset": 0, 00:19:56.001 "data_size": 7936 00:19:56.001 }, 00:19:56.001 { 00:19:56.001 "name": "BaseBdev2", 00:19:56.001 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:56.001 "is_configured": 
true, 00:19:56.001 "data_offset": 256, 00:19:56.001 "data_size": 7936 00:19:56.001 } 00:19:56.001 ] 00:19:56.001 }' 00:19:56.001 09:56:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:56.001 09:56:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:56.001 09:56:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:56.001 09:56:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:56.001 09:56:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:56.001 09:56:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:19:56.001 09:56:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:56.001 09:56:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:56.001 09:56:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:56.001 09:56:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:56.001 09:56:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:56.001 09:56:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:56.001 09:56:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.001 09:56:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:56.001 [2024-11-27 09:56:57.104619] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:56.001 [2024-11-27 09:56:57.104891] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:56.001 [2024-11-27 09:56:57.104976] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:56.001 request: 00:19:56.001 { 00:19:56.001 "base_bdev": "BaseBdev1", 00:19:56.001 "raid_bdev": "raid_bdev1", 00:19:56.001 "method": "bdev_raid_add_base_bdev", 00:19:56.001 "req_id": 1 00:19:56.001 } 00:19:56.001 Got JSON-RPC error response 00:19:56.001 response: 00:19:56.001 { 00:19:56.001 "code": -22, 00:19:56.001 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:56.001 } 00:19:56.001 09:56:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:56.001 09:56:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:19:56.001 09:56:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:56.001 09:56:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:56.001 09:56:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:56.001 09:56:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:57.381 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:57.381 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.381 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.381 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:19:57.381 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:57.381 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:57.381 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.381 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.381 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.381 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.381 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.381 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.381 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.381 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.381 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.381 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.381 "name": "raid_bdev1", 00:19:57.381 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:57.381 "strip_size_kb": 0, 00:19:57.381 "state": "online", 00:19:57.381 "raid_level": "raid1", 00:19:57.381 "superblock": true, 00:19:57.381 "num_base_bdevs": 2, 00:19:57.381 "num_base_bdevs_discovered": 1, 00:19:57.381 "num_base_bdevs_operational": 1, 00:19:57.381 "base_bdevs_list": [ 00:19:57.381 { 00:19:57.381 "name": null, 00:19:57.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.381 "is_configured": false, 00:19:57.381 
"data_offset": 0, 00:19:57.381 "data_size": 7936 00:19:57.381 }, 00:19:57.381 { 00:19:57.381 "name": "BaseBdev2", 00:19:57.381 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:57.381 "is_configured": true, 00:19:57.381 "data_offset": 256, 00:19:57.381 "data_size": 7936 00:19:57.381 } 00:19:57.381 ] 00:19:57.381 }' 00:19:57.381 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.381 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.641 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:57.641 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:57.641 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:57.641 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:57.641 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:57.641 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.641 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.641 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.641 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:57.641 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.641 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:57.641 "name": "raid_bdev1", 00:19:57.641 "uuid": "ecbb4d0f-80c9-425a-b519-bdad9788fae4", 00:19:57.641 
"strip_size_kb": 0, 00:19:57.641 "state": "online", 00:19:57.641 "raid_level": "raid1", 00:19:57.642 "superblock": true, 00:19:57.642 "num_base_bdevs": 2, 00:19:57.642 "num_base_bdevs_discovered": 1, 00:19:57.642 "num_base_bdevs_operational": 1, 00:19:57.642 "base_bdevs_list": [ 00:19:57.642 { 00:19:57.642 "name": null, 00:19:57.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.642 "is_configured": false, 00:19:57.642 "data_offset": 0, 00:19:57.642 "data_size": 7936 00:19:57.642 }, 00:19:57.642 { 00:19:57.642 "name": "BaseBdev2", 00:19:57.642 "uuid": "2d2ce5d8-338e-59d3-8eb5-55926e8cdac8", 00:19:57.642 "is_configured": true, 00:19:57.642 "data_offset": 256, 00:19:57.642 "data_size": 7936 00:19:57.642 } 00:19:57.642 ] 00:19:57.642 }' 00:19:57.642 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:57.642 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:57.642 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:57.642 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:57.642 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88110 00:19:57.642 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88110 ']' 00:19:57.642 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88110 00:19:57.642 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:19:57.642 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.642 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88110 00:19:57.642 09:56:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:57.642 killing process with pid 88110 00:19:57.642 Received shutdown signal, test time was about 60.000000 seconds 00:19:57.642 00:19:57.642 Latency(us) 00:19:57.642 [2024-11-27T09:56:58.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.642 [2024-11-27T09:56:58.775Z] =================================================================================================================== 00:19:57.642 [2024-11-27T09:56:58.775Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:57.642 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:57.642 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88110' 00:19:57.642 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88110 00:19:57.642 [2024-11-27 09:56:58.768189] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:57.642 09:56:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88110 00:19:57.642 [2024-11-27 09:56:58.768369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:57.642 [2024-11-27 09:56:58.768428] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:57.642 [2024-11-27 09:56:58.768440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:58.211 [2024-11-27 09:56:59.120201] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:59.593 09:57:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:19:59.593 00:19:59.593 real 0m20.257s 00:19:59.593 user 0m26.228s 00:19:59.593 sys 0m2.881s 00:19:59.593 09:57:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.593 09:57:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:59.593 ************************************ 00:19:59.593 END TEST raid_rebuild_test_sb_md_separate 00:19:59.593 ************************************ 00:19:59.593 09:57:00 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:19:59.593 09:57:00 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:19:59.593 09:57:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:59.593 09:57:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.593 09:57:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:59.593 ************************************ 00:19:59.593 START TEST raid_state_function_test_sb_md_interleaved 00:19:59.593 ************************************ 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:59.593 09:57:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88803 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:59.593 Process raid pid: 88803 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88803' 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88803 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88803 ']' 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.593 09:57:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:59.593 [2024-11-27 09:57:00.528594] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:19:59.593 [2024-11-27 09:57:00.528737] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.593 [2024-11-27 09:57:00.695101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.854 [2024-11-27 09:57:00.839539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.114 [2024-11-27 09:57:01.085411] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:00.114 [2024-11-27 09:57:01.085498] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.373 [2024-11-27 09:57:01.380848] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:00.373 [2024-11-27 09:57:01.380931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:00.373 [2024-11-27 09:57:01.380944] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:00.373 [2024-11-27 09:57:01.380956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:00.373 09:57:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.373 09:57:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.373 "name": "Existed_Raid", 00:20:00.373 "uuid": "7ca99511-609b-47e2-bf9f-1e6ca3bbaf50", 00:20:00.373 "strip_size_kb": 0, 00:20:00.373 "state": "configuring", 00:20:00.373 "raid_level": "raid1", 00:20:00.373 "superblock": true, 00:20:00.373 "num_base_bdevs": 2, 00:20:00.373 "num_base_bdevs_discovered": 0, 00:20:00.373 "num_base_bdevs_operational": 2, 00:20:00.373 "base_bdevs_list": [ 00:20:00.373 { 00:20:00.373 "name": "BaseBdev1", 00:20:00.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.373 "is_configured": false, 00:20:00.373 "data_offset": 0, 00:20:00.373 "data_size": 0 00:20:00.373 }, 00:20:00.373 { 00:20:00.373 "name": "BaseBdev2", 00:20:00.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.373 "is_configured": false, 00:20:00.373 "data_offset": 0, 00:20:00.373 "data_size": 0 00:20:00.373 } 00:20:00.373 ] 00:20:00.373 }' 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.373 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.942 [2024-11-27 09:57:01.867971] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:00.942 [2024-11-27 09:57:01.868040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.942 [2024-11-27 09:57:01.879942] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:00.942 [2024-11-27 09:57:01.880006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:00.942 [2024-11-27 09:57:01.880033] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:00.942 [2024-11-27 09:57:01.880046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.942 [2024-11-27 09:57:01.938023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:00.942 BaseBdev1 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.942 [ 00:20:00.942 { 00:20:00.942 "name": "BaseBdev1", 00:20:00.942 "aliases": [ 00:20:00.942 "94d36678-5ace-4767-b2fc-07ec8ad03115" 00:20:00.942 ], 00:20:00.942 "product_name": "Malloc disk", 00:20:00.942 "block_size": 4128, 00:20:00.942 "num_blocks": 8192, 00:20:00.942 "uuid": "94d36678-5ace-4767-b2fc-07ec8ad03115", 00:20:00.942 "md_size": 32, 00:20:00.942 
"md_interleave": true, 00:20:00.942 "dif_type": 0, 00:20:00.942 "assigned_rate_limits": { 00:20:00.942 "rw_ios_per_sec": 0, 00:20:00.942 "rw_mbytes_per_sec": 0, 00:20:00.942 "r_mbytes_per_sec": 0, 00:20:00.942 "w_mbytes_per_sec": 0 00:20:00.942 }, 00:20:00.942 "claimed": true, 00:20:00.942 "claim_type": "exclusive_write", 00:20:00.942 "zoned": false, 00:20:00.942 "supported_io_types": { 00:20:00.942 "read": true, 00:20:00.942 "write": true, 00:20:00.942 "unmap": true, 00:20:00.942 "flush": true, 00:20:00.942 "reset": true, 00:20:00.942 "nvme_admin": false, 00:20:00.942 "nvme_io": false, 00:20:00.942 "nvme_io_md": false, 00:20:00.942 "write_zeroes": true, 00:20:00.942 "zcopy": true, 00:20:00.942 "get_zone_info": false, 00:20:00.942 "zone_management": false, 00:20:00.942 "zone_append": false, 00:20:00.942 "compare": false, 00:20:00.942 "compare_and_write": false, 00:20:00.942 "abort": true, 00:20:00.942 "seek_hole": false, 00:20:00.942 "seek_data": false, 00:20:00.942 "copy": true, 00:20:00.942 "nvme_iov_md": false 00:20:00.942 }, 00:20:00.942 "memory_domains": [ 00:20:00.942 { 00:20:00.942 "dma_device_id": "system", 00:20:00.942 "dma_device_type": 1 00:20:00.942 }, 00:20:00.942 { 00:20:00.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.942 "dma_device_type": 2 00:20:00.942 } 00:20:00.942 ], 00:20:00.942 "driver_specific": {} 00:20:00.942 } 00:20:00.942 ] 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:00.942 09:57:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:00.942 09:57:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.942 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.942 "name": "Existed_Raid", 00:20:00.942 "uuid": "149cbc30-f0a1-4dcd-92ba-96d8315b07d7", 00:20:00.942 "strip_size_kb": 0, 00:20:00.942 "state": "configuring", 00:20:00.942 "raid_level": "raid1", 
00:20:00.942 "superblock": true, 00:20:00.942 "num_base_bdevs": 2, 00:20:00.942 "num_base_bdevs_discovered": 1, 00:20:00.942 "num_base_bdevs_operational": 2, 00:20:00.942 "base_bdevs_list": [ 00:20:00.942 { 00:20:00.942 "name": "BaseBdev1", 00:20:00.942 "uuid": "94d36678-5ace-4767-b2fc-07ec8ad03115", 00:20:00.942 "is_configured": true, 00:20:00.942 "data_offset": 256, 00:20:00.942 "data_size": 7936 00:20:00.942 }, 00:20:00.942 { 00:20:00.942 "name": "BaseBdev2", 00:20:00.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.942 "is_configured": false, 00:20:00.942 "data_offset": 0, 00:20:00.942 "data_size": 0 00:20:00.942 } 00:20:00.942 ] 00:20:00.942 }' 00:20:00.942 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.942 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.512 [2024-11-27 09:57:02.449238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:01.512 [2024-11-27 09:57:02.449314] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.512 [2024-11-27 09:57:02.461300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:01.512 [2024-11-27 09:57:02.463617] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:01.512 [2024-11-27 09:57:02.463673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:01.512 
09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:01.512 "name": "Existed_Raid", 00:20:01.512 "uuid": "2176ae9a-f5aa-46bd-ac56-a94fbdcef0a8", 00:20:01.512 "strip_size_kb": 0, 00:20:01.512 "state": "configuring", 00:20:01.512 "raid_level": "raid1", 00:20:01.512 "superblock": true, 00:20:01.512 "num_base_bdevs": 2, 00:20:01.512 "num_base_bdevs_discovered": 1, 00:20:01.512 "num_base_bdevs_operational": 2, 00:20:01.512 "base_bdevs_list": [ 00:20:01.512 { 00:20:01.512 "name": "BaseBdev1", 00:20:01.512 "uuid": "94d36678-5ace-4767-b2fc-07ec8ad03115", 00:20:01.512 "is_configured": true, 00:20:01.512 "data_offset": 256, 00:20:01.512 "data_size": 7936 00:20:01.512 }, 00:20:01.512 { 00:20:01.512 "name": "BaseBdev2", 00:20:01.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:01.512 "is_configured": false, 00:20:01.512 "data_offset": 0, 00:20:01.512 "data_size": 0 00:20:01.512 } 00:20:01.512 ] 00:20:01.512 }' 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:20:01.512 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.104 [2024-11-27 09:57:02.955961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:02.104 [2024-11-27 09:57:02.956232] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:02.104 [2024-11-27 09:57:02.956248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:02.104 [2024-11-27 09:57:02.956340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:02.104 [2024-11-27 09:57:02.956427] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:02.104 [2024-11-27 09:57:02.956467] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:02.104 [2024-11-27 09:57:02.956542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.104 BaseBdev2 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.104 [ 00:20:02.104 { 00:20:02.104 "name": "BaseBdev2", 00:20:02.104 "aliases": [ 00:20:02.104 "467d5104-a0b8-4c9b-8731-81f715e4e979" 00:20:02.104 ], 00:20:02.104 "product_name": "Malloc disk", 00:20:02.104 "block_size": 4128, 00:20:02.104 "num_blocks": 8192, 00:20:02.104 "uuid": "467d5104-a0b8-4c9b-8731-81f715e4e979", 00:20:02.104 "md_size": 32, 00:20:02.104 "md_interleave": true, 00:20:02.104 "dif_type": 0, 00:20:02.104 "assigned_rate_limits": { 00:20:02.104 "rw_ios_per_sec": 0, 00:20:02.104 "rw_mbytes_per_sec": 0, 00:20:02.104 "r_mbytes_per_sec": 0, 00:20:02.104 "w_mbytes_per_sec": 0 00:20:02.104 }, 00:20:02.104 "claimed": true, 00:20:02.104 "claim_type": "exclusive_write", 
00:20:02.104 "zoned": false, 00:20:02.104 "supported_io_types": { 00:20:02.104 "read": true, 00:20:02.104 "write": true, 00:20:02.104 "unmap": true, 00:20:02.104 "flush": true, 00:20:02.104 "reset": true, 00:20:02.104 "nvme_admin": false, 00:20:02.104 "nvme_io": false, 00:20:02.104 "nvme_io_md": false, 00:20:02.104 "write_zeroes": true, 00:20:02.104 "zcopy": true, 00:20:02.104 "get_zone_info": false, 00:20:02.104 "zone_management": false, 00:20:02.104 "zone_append": false, 00:20:02.104 "compare": false, 00:20:02.104 "compare_and_write": false, 00:20:02.104 "abort": true, 00:20:02.104 "seek_hole": false, 00:20:02.104 "seek_data": false, 00:20:02.104 "copy": true, 00:20:02.104 "nvme_iov_md": false 00:20:02.104 }, 00:20:02.104 "memory_domains": [ 00:20:02.104 { 00:20:02.104 "dma_device_id": "system", 00:20:02.104 "dma_device_type": 1 00:20:02.104 }, 00:20:02.104 { 00:20:02.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.104 "dma_device_type": 2 00:20:02.104 } 00:20:02.104 ], 00:20:02.104 "driver_specific": {} 00:20:02.104 } 00:20:02.104 ] 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.104 
09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.104 09:57:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.104 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.104 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.104 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.105 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.105 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.105 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.105 "name": "Existed_Raid", 00:20:02.105 "uuid": "2176ae9a-f5aa-46bd-ac56-a94fbdcef0a8", 00:20:02.105 "strip_size_kb": 0, 00:20:02.105 "state": "online", 00:20:02.105 "raid_level": "raid1", 00:20:02.105 "superblock": true, 00:20:02.105 "num_base_bdevs": 2, 00:20:02.105 "num_base_bdevs_discovered": 2, 00:20:02.105 
"num_base_bdevs_operational": 2, 00:20:02.105 "base_bdevs_list": [ 00:20:02.105 { 00:20:02.105 "name": "BaseBdev1", 00:20:02.105 "uuid": "94d36678-5ace-4767-b2fc-07ec8ad03115", 00:20:02.105 "is_configured": true, 00:20:02.105 "data_offset": 256, 00:20:02.105 "data_size": 7936 00:20:02.105 }, 00:20:02.105 { 00:20:02.105 "name": "BaseBdev2", 00:20:02.105 "uuid": "467d5104-a0b8-4c9b-8731-81f715e4e979", 00:20:02.105 "is_configured": true, 00:20:02.105 "data_offset": 256, 00:20:02.105 "data_size": 7936 00:20:02.105 } 00:20:02.105 ] 00:20:02.105 }' 00:20:02.105 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.105 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.391 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:02.391 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:02.391 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:02.391 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:02.391 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:02.391 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:02.391 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:02.391 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:02.391 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.391 09:57:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.391 [2024-11-27 09:57:03.483472] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:02.391 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.391 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:02.391 "name": "Existed_Raid", 00:20:02.391 "aliases": [ 00:20:02.391 "2176ae9a-f5aa-46bd-ac56-a94fbdcef0a8" 00:20:02.391 ], 00:20:02.391 "product_name": "Raid Volume", 00:20:02.391 "block_size": 4128, 00:20:02.391 "num_blocks": 7936, 00:20:02.391 "uuid": "2176ae9a-f5aa-46bd-ac56-a94fbdcef0a8", 00:20:02.391 "md_size": 32, 00:20:02.391 "md_interleave": true, 00:20:02.391 "dif_type": 0, 00:20:02.391 "assigned_rate_limits": { 00:20:02.391 "rw_ios_per_sec": 0, 00:20:02.391 "rw_mbytes_per_sec": 0, 00:20:02.391 "r_mbytes_per_sec": 0, 00:20:02.391 "w_mbytes_per_sec": 0 00:20:02.391 }, 00:20:02.391 "claimed": false, 00:20:02.391 "zoned": false, 00:20:02.391 "supported_io_types": { 00:20:02.391 "read": true, 00:20:02.391 "write": true, 00:20:02.391 "unmap": false, 00:20:02.391 "flush": false, 00:20:02.391 "reset": true, 00:20:02.391 "nvme_admin": false, 00:20:02.391 "nvme_io": false, 00:20:02.391 "nvme_io_md": false, 00:20:02.391 "write_zeroes": true, 00:20:02.391 "zcopy": false, 00:20:02.391 "get_zone_info": false, 00:20:02.391 "zone_management": false, 00:20:02.391 "zone_append": false, 00:20:02.391 "compare": false, 00:20:02.391 "compare_and_write": false, 00:20:02.391 "abort": false, 00:20:02.391 "seek_hole": false, 00:20:02.391 "seek_data": false, 00:20:02.391 "copy": false, 00:20:02.391 "nvme_iov_md": false 00:20:02.391 }, 00:20:02.391 "memory_domains": [ 00:20:02.391 { 00:20:02.391 "dma_device_id": "system", 00:20:02.391 "dma_device_type": 1 00:20:02.391 }, 00:20:02.391 { 00:20:02.391 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:02.391 "dma_device_type": 2 00:20:02.391 }, 00:20:02.391 { 00:20:02.391 "dma_device_id": "system", 00:20:02.391 "dma_device_type": 1 00:20:02.391 }, 00:20:02.391 { 00:20:02.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.391 "dma_device_type": 2 00:20:02.391 } 00:20:02.391 ], 00:20:02.391 "driver_specific": { 00:20:02.391 "raid": { 00:20:02.391 "uuid": "2176ae9a-f5aa-46bd-ac56-a94fbdcef0a8", 00:20:02.391 "strip_size_kb": 0, 00:20:02.391 "state": "online", 00:20:02.391 "raid_level": "raid1", 00:20:02.391 "superblock": true, 00:20:02.391 "num_base_bdevs": 2, 00:20:02.391 "num_base_bdevs_discovered": 2, 00:20:02.391 "num_base_bdevs_operational": 2, 00:20:02.391 "base_bdevs_list": [ 00:20:02.391 { 00:20:02.391 "name": "BaseBdev1", 00:20:02.391 "uuid": "94d36678-5ace-4767-b2fc-07ec8ad03115", 00:20:02.391 "is_configured": true, 00:20:02.391 "data_offset": 256, 00:20:02.391 "data_size": 7936 00:20:02.391 }, 00:20:02.391 { 00:20:02.391 "name": "BaseBdev2", 00:20:02.391 "uuid": "467d5104-a0b8-4c9b-8731-81f715e4e979", 00:20:02.391 "is_configured": true, 00:20:02.391 "data_offset": 256, 00:20:02.391 "data_size": 7936 00:20:02.391 } 00:20:02.391 ] 00:20:02.391 } 00:20:02.392 } 00:20:02.392 }' 00:20:02.392 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:02.651 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:02.651 BaseBdev2' 00:20:02.651 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:02.651 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:02.651 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:20:02.651 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:02.651 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:02.651 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.651 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.651 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.651 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:02.651 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:02.652 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:02.652 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:02.652 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:02.652 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.652 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.652 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.652 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:02.652 
09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:02.652 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:02.652 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.652 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.652 [2024-11-27 09:57:03.686867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.912 09:57:03 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.912 "name": "Existed_Raid", 00:20:02.912 "uuid": "2176ae9a-f5aa-46bd-ac56-a94fbdcef0a8", 00:20:02.912 "strip_size_kb": 0, 00:20:02.912 "state": "online", 00:20:02.912 "raid_level": "raid1", 00:20:02.912 "superblock": true, 00:20:02.912 "num_base_bdevs": 2, 00:20:02.912 "num_base_bdevs_discovered": 1, 00:20:02.912 "num_base_bdevs_operational": 1, 00:20:02.912 "base_bdevs_list": [ 00:20:02.912 { 00:20:02.912 "name": null, 00:20:02.912 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:02.912 "is_configured": false, 00:20:02.912 "data_offset": 0, 00:20:02.912 "data_size": 7936 00:20:02.912 }, 00:20:02.912 { 00:20:02.912 "name": "BaseBdev2", 00:20:02.912 "uuid": "467d5104-a0b8-4c9b-8731-81f715e4e979", 00:20:02.912 "is_configured": true, 00:20:02.912 "data_offset": 256, 00:20:02.912 "data_size": 7936 00:20:02.912 } 00:20:02.912 ] 00:20:02.912 }' 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.912 09:57:03 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.172 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:03.172 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:03.172 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:03.172 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.172 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.172 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.172 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.172 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:03.172 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:03.172 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:03.172 09:57:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.172 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.172 [2024-11-27 09:57:04.263269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:03.172 [2024-11-27 09:57:04.263419] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:03.432 [2024-11-27 09:57:04.369161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:03.432 [2024-11-27 09:57:04.369246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:03.432 [2024-11-27 09:57:04.369261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88803 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88803 ']' 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88803 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88803 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:03.432 killing process with pid 88803 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88803' 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88803 00:20:03.432 [2024-11-27 09:57:04.468571] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:03.432 09:57:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88803 00:20:03.432 [2024-11-27 09:57:04.488411] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:04.813 
09:57:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:20:04.813 00:20:04.813 real 0m5.306s 00:20:04.813 user 0m7.477s 00:20:04.813 sys 0m1.038s 00:20:04.813 09:57:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:04.813 09:57:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.813 ************************************ 00:20:04.813 END TEST raid_state_function_test_sb_md_interleaved 00:20:04.813 ************************************ 00:20:04.813 09:57:05 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:20:04.813 09:57:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:04.813 09:57:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:04.813 09:57:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:04.813 ************************************ 00:20:04.813 START TEST raid_superblock_test_md_interleaved 00:20:04.813 ************************************ 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89055 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89055 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89055 ']' 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.813 09:57:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:04.813 [2024-11-27 09:57:05.901223] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:20:04.813 [2024-11-27 09:57:05.901371] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89055 ] 00:20:05.072 [2024-11-27 09:57:06.081448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.331 [2024-11-27 09:57:06.223688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.590 [2024-11-27 09:57:06.468214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:05.590 [2024-11-27 09:57:06.468277] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.850 malloc1 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.850 [2024-11-27 09:57:06.805636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:05.850 [2024-11-27 09:57:06.805743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.850 [2024-11-27 09:57:06.805772] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:05.850 [2024-11-27 09:57:06.805784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.850 
[2024-11-27 09:57:06.808106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.850 [2024-11-27 09:57:06.808140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:05.850 pt1 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:05.850 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.851 malloc2 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.851 [2024-11-27 09:57:06.875499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:05.851 [2024-11-27 09:57:06.875583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.851 [2024-11-27 09:57:06.875610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:05.851 [2024-11-27 09:57:06.875619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.851 [2024-11-27 09:57:06.877923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.851 [2024-11-27 09:57:06.877965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:05.851 pt2 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.851 [2024-11-27 09:57:06.887531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:05.851 [2024-11-27 09:57:06.889761] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:05.851 [2024-11-27 09:57:06.889977] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:05.851 [2024-11-27 09:57:06.889991] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:05.851 [2024-11-27 09:57:06.890107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:05.851 [2024-11-27 09:57:06.890206] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:05.851 [2024-11-27 09:57:06.890223] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:05.851 [2024-11-27 09:57:06.890321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.851 
09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.851 "name": "raid_bdev1", 00:20:05.851 "uuid": "27185a69-65ba-49a1-b5ed-c2e023f716df", 00:20:05.851 "strip_size_kb": 0, 00:20:05.851 "state": "online", 00:20:05.851 "raid_level": "raid1", 00:20:05.851 "superblock": true, 00:20:05.851 "num_base_bdevs": 2, 00:20:05.851 "num_base_bdevs_discovered": 2, 00:20:05.851 "num_base_bdevs_operational": 2, 00:20:05.851 "base_bdevs_list": [ 00:20:05.851 { 00:20:05.851 "name": "pt1", 00:20:05.851 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:05.851 "is_configured": true, 00:20:05.851 "data_offset": 256, 00:20:05.851 "data_size": 7936 00:20:05.851 }, 00:20:05.851 { 00:20:05.851 "name": "pt2", 00:20:05.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:05.851 "is_configured": true, 00:20:05.851 "data_offset": 256, 00:20:05.851 "data_size": 7936 00:20:05.851 } 00:20:05.851 ] 00:20:05.851 }' 00:20:05.851 09:57:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.851 09:57:06 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.421 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:06.421 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:06.421 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:06.421 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:06.421 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:06.421 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:06.421 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:06.421 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:06.421 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.421 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.421 [2024-11-27 09:57:07.347165] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:06.421 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.421 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:06.421 "name": "raid_bdev1", 00:20:06.421 "aliases": [ 00:20:06.421 "27185a69-65ba-49a1-b5ed-c2e023f716df" 00:20:06.421 ], 00:20:06.421 "product_name": "Raid Volume", 00:20:06.421 "block_size": 4128, 00:20:06.421 "num_blocks": 7936, 00:20:06.421 "uuid": "27185a69-65ba-49a1-b5ed-c2e023f716df", 00:20:06.421 "md_size": 32, 
00:20:06.421 "md_interleave": true, 00:20:06.421 "dif_type": 0, 00:20:06.421 "assigned_rate_limits": { 00:20:06.421 "rw_ios_per_sec": 0, 00:20:06.421 "rw_mbytes_per_sec": 0, 00:20:06.421 "r_mbytes_per_sec": 0, 00:20:06.421 "w_mbytes_per_sec": 0 00:20:06.421 }, 00:20:06.421 "claimed": false, 00:20:06.421 "zoned": false, 00:20:06.421 "supported_io_types": { 00:20:06.421 "read": true, 00:20:06.421 "write": true, 00:20:06.421 "unmap": false, 00:20:06.421 "flush": false, 00:20:06.421 "reset": true, 00:20:06.421 "nvme_admin": false, 00:20:06.421 "nvme_io": false, 00:20:06.421 "nvme_io_md": false, 00:20:06.421 "write_zeroes": true, 00:20:06.421 "zcopy": false, 00:20:06.421 "get_zone_info": false, 00:20:06.421 "zone_management": false, 00:20:06.421 "zone_append": false, 00:20:06.421 "compare": false, 00:20:06.421 "compare_and_write": false, 00:20:06.421 "abort": false, 00:20:06.421 "seek_hole": false, 00:20:06.421 "seek_data": false, 00:20:06.421 "copy": false, 00:20:06.421 "nvme_iov_md": false 00:20:06.421 }, 00:20:06.421 "memory_domains": [ 00:20:06.421 { 00:20:06.421 "dma_device_id": "system", 00:20:06.421 "dma_device_type": 1 00:20:06.421 }, 00:20:06.421 { 00:20:06.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.421 "dma_device_type": 2 00:20:06.421 }, 00:20:06.421 { 00:20:06.421 "dma_device_id": "system", 00:20:06.421 "dma_device_type": 1 00:20:06.421 }, 00:20:06.421 { 00:20:06.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:06.421 "dma_device_type": 2 00:20:06.421 } 00:20:06.421 ], 00:20:06.421 "driver_specific": { 00:20:06.421 "raid": { 00:20:06.421 "uuid": "27185a69-65ba-49a1-b5ed-c2e023f716df", 00:20:06.421 "strip_size_kb": 0, 00:20:06.421 "state": "online", 00:20:06.421 "raid_level": "raid1", 00:20:06.421 "superblock": true, 00:20:06.421 "num_base_bdevs": 2, 00:20:06.421 "num_base_bdevs_discovered": 2, 00:20:06.421 "num_base_bdevs_operational": 2, 00:20:06.421 "base_bdevs_list": [ 00:20:06.421 { 00:20:06.421 "name": "pt1", 00:20:06.421 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:20:06.421 "is_configured": true, 00:20:06.421 "data_offset": 256, 00:20:06.421 "data_size": 7936 00:20:06.421 }, 00:20:06.421 { 00:20:06.421 "name": "pt2", 00:20:06.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:06.421 "is_configured": true, 00:20:06.421 "data_offset": 256, 00:20:06.421 "data_size": 7936 00:20:06.421 } 00:20:06.421 ] 00:20:06.421 } 00:20:06.421 } 00:20:06.421 }' 00:20:06.421 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:06.421 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:06.421 pt2' 00:20:06.421 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:06.421 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:06.422 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:06.422 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:06.422 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:06.422 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.422 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.422 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.422 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:06.422 09:57:07 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:06.422 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:06.422 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:06.422 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.422 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.422 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:06.422 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.422 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:06.422 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:06.422 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:06.422 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:06.422 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.422 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.682 [2024-11-27 09:57:07.558741] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=27185a69-65ba-49a1-b5ed-c2e023f716df 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 27185a69-65ba-49a1-b5ed-c2e023f716df ']' 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.682 [2024-11-27 09:57:07.602318] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:06.682 [2024-11-27 09:57:07.602411] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:06.682 [2024-11-27 09:57:07.602565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:06.682 [2024-11-27 09:57:07.602662] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:06.682 [2024-11-27 09:57:07.602714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.682 09:57:07 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.682 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:06.683 09:57:07 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.683 [2024-11-27 09:57:07.746140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:06.683 [2024-11-27 09:57:07.748494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:06.683 [2024-11-27 09:57:07.748643] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:20:06.683 [2024-11-27 09:57:07.748763] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:06.683 [2024-11-27 09:57:07.748820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:06.683 [2024-11-27 09:57:07.748855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:06.683 request: 00:20:06.683 { 00:20:06.683 "name": "raid_bdev1", 00:20:06.683 "raid_level": "raid1", 00:20:06.683 "base_bdevs": [ 00:20:06.683 "malloc1", 00:20:06.683 "malloc2" 00:20:06.683 ], 00:20:06.683 "superblock": false, 00:20:06.683 "method": "bdev_raid_create", 00:20:06.683 "req_id": 1 00:20:06.683 } 00:20:06.683 Got JSON-RPC error response 00:20:06.683 response: 00:20:06.683 { 00:20:06.683 "code": -17, 00:20:06.683 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:06.683 } 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.683 09:57:07 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.683 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.942 [2024-11-27 09:57:07.813979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:06.942 [2024-11-27 09:57:07.814079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.942 [2024-11-27 09:57:07.814101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:06.942 [2024-11-27 09:57:07.814114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.942 [2024-11-27 09:57:07.816493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.942 [2024-11-27 09:57:07.816622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:06.942 [2024-11-27 09:57:07.816706] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:06.943 [2024-11-27 09:57:07.816779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:06.943 pt1 00:20:06.943 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.943 09:57:07 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:06.943 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:06.943 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:06.943 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:06.943 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:06.943 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:06.943 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.943 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.943 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.943 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.943 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.943 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.943 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.943 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:06.943 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.943 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.943 
"name": "raid_bdev1", 00:20:06.943 "uuid": "27185a69-65ba-49a1-b5ed-c2e023f716df", 00:20:06.943 "strip_size_kb": 0, 00:20:06.943 "state": "configuring", 00:20:06.943 "raid_level": "raid1", 00:20:06.943 "superblock": true, 00:20:06.943 "num_base_bdevs": 2, 00:20:06.943 "num_base_bdevs_discovered": 1, 00:20:06.943 "num_base_bdevs_operational": 2, 00:20:06.943 "base_bdevs_list": [ 00:20:06.943 { 00:20:06.943 "name": "pt1", 00:20:06.943 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:06.943 "is_configured": true, 00:20:06.943 "data_offset": 256, 00:20:06.943 "data_size": 7936 00:20:06.943 }, 00:20:06.943 { 00:20:06.943 "name": null, 00:20:06.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:06.943 "is_configured": false, 00:20:06.943 "data_offset": 256, 00:20:06.943 "data_size": 7936 00:20:06.943 } 00:20:06.943 ] 00:20:06.943 }' 00:20:06.943 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.943 09:57:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.203 [2024-11-27 09:57:08.253202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:07.203 [2024-11-27 09:57:08.253387] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.203 [2024-11-27 09:57:08.253434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:07.203 [2024-11-27 09:57:08.253475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.203 [2024-11-27 09:57:08.253746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.203 [2024-11-27 09:57:08.253805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:07.203 [2024-11-27 09:57:08.253901] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:07.203 [2024-11-27 09:57:08.253958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:07.203 [2024-11-27 09:57:08.254116] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:07.203 [2024-11-27 09:57:08.254165] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:07.203 [2024-11-27 09:57:08.254285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:07.203 [2024-11-27 09:57:08.254403] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:07.203 [2024-11-27 09:57:08.254440] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:07.203 [2024-11-27 09:57:08.254556] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:07.203 pt2 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:07.203 09:57:08 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:07.203 "name": 
"raid_bdev1", 00:20:07.203 "uuid": "27185a69-65ba-49a1-b5ed-c2e023f716df", 00:20:07.203 "strip_size_kb": 0, 00:20:07.203 "state": "online", 00:20:07.203 "raid_level": "raid1", 00:20:07.203 "superblock": true, 00:20:07.203 "num_base_bdevs": 2, 00:20:07.203 "num_base_bdevs_discovered": 2, 00:20:07.203 "num_base_bdevs_operational": 2, 00:20:07.203 "base_bdevs_list": [ 00:20:07.203 { 00:20:07.203 "name": "pt1", 00:20:07.203 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:07.203 "is_configured": true, 00:20:07.203 "data_offset": 256, 00:20:07.203 "data_size": 7936 00:20:07.203 }, 00:20:07.203 { 00:20:07.203 "name": "pt2", 00:20:07.203 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:07.203 "is_configured": true, 00:20:07.203 "data_offset": 256, 00:20:07.203 "data_size": 7936 00:20:07.203 } 00:20:07.203 ] 00:20:07.203 }' 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:07.203 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.771 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:07.771 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:07.771 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:07.771 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:07.771 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:07.771 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:07.771 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:07.771 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:07.771 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.771 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.771 [2024-11-27 09:57:08.736715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:07.771 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.771 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:07.771 "name": "raid_bdev1", 00:20:07.771 "aliases": [ 00:20:07.771 "27185a69-65ba-49a1-b5ed-c2e023f716df" 00:20:07.772 ], 00:20:07.772 "product_name": "Raid Volume", 00:20:07.772 "block_size": 4128, 00:20:07.772 "num_blocks": 7936, 00:20:07.772 "uuid": "27185a69-65ba-49a1-b5ed-c2e023f716df", 00:20:07.772 "md_size": 32, 00:20:07.772 "md_interleave": true, 00:20:07.772 "dif_type": 0, 00:20:07.772 "assigned_rate_limits": { 00:20:07.772 "rw_ios_per_sec": 0, 00:20:07.772 "rw_mbytes_per_sec": 0, 00:20:07.772 "r_mbytes_per_sec": 0, 00:20:07.772 "w_mbytes_per_sec": 0 00:20:07.772 }, 00:20:07.772 "claimed": false, 00:20:07.772 "zoned": false, 00:20:07.772 "supported_io_types": { 00:20:07.772 "read": true, 00:20:07.772 "write": true, 00:20:07.772 "unmap": false, 00:20:07.772 "flush": false, 00:20:07.772 "reset": true, 00:20:07.772 "nvme_admin": false, 00:20:07.772 "nvme_io": false, 00:20:07.772 "nvme_io_md": false, 00:20:07.772 "write_zeroes": true, 00:20:07.772 "zcopy": false, 00:20:07.772 "get_zone_info": false, 00:20:07.772 "zone_management": false, 00:20:07.772 "zone_append": false, 00:20:07.772 "compare": false, 00:20:07.772 "compare_and_write": false, 00:20:07.772 "abort": false, 00:20:07.772 "seek_hole": false, 00:20:07.772 "seek_data": false, 00:20:07.772 "copy": false, 00:20:07.772 "nvme_iov_md": false 00:20:07.772 }, 
00:20:07.772 "memory_domains": [ 00:20:07.772 { 00:20:07.772 "dma_device_id": "system", 00:20:07.772 "dma_device_type": 1 00:20:07.772 }, 00:20:07.772 { 00:20:07.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.772 "dma_device_type": 2 00:20:07.772 }, 00:20:07.772 { 00:20:07.772 "dma_device_id": "system", 00:20:07.772 "dma_device_type": 1 00:20:07.772 }, 00:20:07.772 { 00:20:07.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.772 "dma_device_type": 2 00:20:07.772 } 00:20:07.772 ], 00:20:07.772 "driver_specific": { 00:20:07.772 "raid": { 00:20:07.772 "uuid": "27185a69-65ba-49a1-b5ed-c2e023f716df", 00:20:07.772 "strip_size_kb": 0, 00:20:07.772 "state": "online", 00:20:07.772 "raid_level": "raid1", 00:20:07.772 "superblock": true, 00:20:07.772 "num_base_bdevs": 2, 00:20:07.772 "num_base_bdevs_discovered": 2, 00:20:07.772 "num_base_bdevs_operational": 2, 00:20:07.772 "base_bdevs_list": [ 00:20:07.772 { 00:20:07.772 "name": "pt1", 00:20:07.772 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:07.772 "is_configured": true, 00:20:07.772 "data_offset": 256, 00:20:07.772 "data_size": 7936 00:20:07.772 }, 00:20:07.772 { 00:20:07.772 "name": "pt2", 00:20:07.772 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:07.772 "is_configured": true, 00:20:07.772 "data_offset": 256, 00:20:07.772 "data_size": 7936 00:20:07.772 } 00:20:07.772 ] 00:20:07.772 } 00:20:07.772 } 00:20:07.772 }' 00:20:07.772 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:07.772 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:07.772 pt2' 00:20:07.772 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:07.772 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='4128 32 true 0' 00:20:07.772 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:07.772 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:07.772 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:07.772 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.772 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:07.772 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.031 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:08.031 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:08.031 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:08.031 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:08.031 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:08.031 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.031 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.031 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.031 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 
true 0' 00:20:08.031 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:08.031 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:08.031 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.031 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.031 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:08.031 [2024-11-27 09:57:08.972337] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:08.031 09:57:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.031 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 27185a69-65ba-49a1-b5ed-c2e023f716df '!=' 27185a69-65ba-49a1-b5ed-c2e023f716df ']' 00:20:08.031 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:08.031 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:08.031 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:08.031 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:08.031 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.031 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.031 [2024-11-27 09:57:09.020001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:08.031 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:08.031 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:08.031 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:08.031 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.031 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:08.031 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:08.031 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:08.032 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.032 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.032 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.032 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.032 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.032 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.032 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.032 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.032 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.032 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:20:08.032 "name": "raid_bdev1", 00:20:08.032 "uuid": "27185a69-65ba-49a1-b5ed-c2e023f716df", 00:20:08.032 "strip_size_kb": 0, 00:20:08.032 "state": "online", 00:20:08.032 "raid_level": "raid1", 00:20:08.032 "superblock": true, 00:20:08.032 "num_base_bdevs": 2, 00:20:08.032 "num_base_bdevs_discovered": 1, 00:20:08.032 "num_base_bdevs_operational": 1, 00:20:08.032 "base_bdevs_list": [ 00:20:08.032 { 00:20:08.032 "name": null, 00:20:08.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.032 "is_configured": false, 00:20:08.032 "data_offset": 0, 00:20:08.032 "data_size": 7936 00:20:08.032 }, 00:20:08.032 { 00:20:08.032 "name": "pt2", 00:20:08.032 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:08.032 "is_configured": true, 00:20:08.032 "data_offset": 256, 00:20:08.032 "data_size": 7936 00:20:08.032 } 00:20:08.032 ] 00:20:08.032 }' 00:20:08.032 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.032 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.601 [2024-11-27 09:57:09.483147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:08.601 [2024-11-27 09:57:09.483273] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:08.601 [2024-11-27 09:57:09.483409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:08.601 [2024-11-27 09:57:09.483487] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:08.601 [2024-11-27 
09:57:09.483535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.601 [2024-11-27 09:57:09.543036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:08.601 [2024-11-27 09:57:09.543117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.601 [2024-11-27 09:57:09.543138] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:08.601 [2024-11-27 09:57:09.543150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.601 [2024-11-27 09:57:09.545514] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.601 [2024-11-27 09:57:09.545558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:08.601 [2024-11-27 09:57:09.545630] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:08.601 [2024-11-27 09:57:09.545709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:08.601 [2024-11-27 09:57:09.545788] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:08.601 [2024-11-27 09:57:09.545801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:20:08.601 [2024-11-27 09:57:09.545904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:08.601 [2024-11-27 09:57:09.545975] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:08.601 [2024-11-27 09:57:09.545982] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:08.601 [2024-11-27 09:57:09.546068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.601 pt2 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:08.601 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:08.602 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.602 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.602 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.602 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.602 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.602 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.602 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.602 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:08.602 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.602 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.602 "name": "raid_bdev1", 00:20:08.602 "uuid": "27185a69-65ba-49a1-b5ed-c2e023f716df", 00:20:08.602 "strip_size_kb": 0, 00:20:08.602 "state": "online", 00:20:08.602 "raid_level": "raid1", 00:20:08.602 "superblock": true, 00:20:08.602 "num_base_bdevs": 2, 00:20:08.602 "num_base_bdevs_discovered": 1, 00:20:08.602 "num_base_bdevs_operational": 1, 00:20:08.602 "base_bdevs_list": [ 00:20:08.602 { 00:20:08.602 "name": null, 00:20:08.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.602 "is_configured": false, 00:20:08.602 "data_offset": 256, 00:20:08.602 "data_size": 7936 00:20:08.602 }, 00:20:08.602 { 00:20:08.602 "name": "pt2", 00:20:08.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:08.602 "is_configured": true, 00:20:08.602 "data_offset": 256, 00:20:08.602 "data_size": 7936 00:20:08.602 } 00:20:08.602 ] 00:20:08.602 }' 00:20:08.602 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.602 09:57:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.172 [2024-11-27 09:57:10.010182] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:09.172 [2024-11-27 09:57:10.010298] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:09.172 [2024-11-27 09:57:10.010425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:09.172 [2024-11-27 09:57:10.010521] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:09.172 [2024-11-27 09:57:10.010575] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.172 [2024-11-27 09:57:10.062163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:09.172 [2024-11-27 09:57:10.062328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.172 [2024-11-27 09:57:10.062370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:09.172 [2024-11-27 09:57:10.062421] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.172 [2024-11-27 09:57:10.064805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.172 [2024-11-27 09:57:10.064890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:09.172 [2024-11-27 09:57:10.064991] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:09.172 [2024-11-27 09:57:10.065092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:09.172 [2024-11-27 09:57:10.065248] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:09.172 [2024-11-27 09:57:10.065301] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:09.172 [2024-11-27 09:57:10.065342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:09.172 [2024-11-27 09:57:10.065444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:09.172 [2024-11-27 09:57:10.065543] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:09.172 [2024-11-27 09:57:10.065552] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:09.172 [2024-11-27 09:57:10.065638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:09.172 [2024-11-27 09:57:10.065702] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:09.172 [2024-11-27 09:57:10.065712] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:09.172 [2024-11-27 09:57:10.065786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.172 pt1 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.172 "name": "raid_bdev1", 00:20:09.172 "uuid": "27185a69-65ba-49a1-b5ed-c2e023f716df", 00:20:09.172 "strip_size_kb": 0, 00:20:09.172 "state": "online", 00:20:09.172 "raid_level": "raid1", 00:20:09.172 "superblock": true, 00:20:09.172 "num_base_bdevs": 2, 00:20:09.172 "num_base_bdevs_discovered": 1, 00:20:09.172 "num_base_bdevs_operational": 1, 00:20:09.172 "base_bdevs_list": [ 00:20:09.172 { 00:20:09.172 "name": null, 00:20:09.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.172 "is_configured": false, 00:20:09.172 "data_offset": 256, 00:20:09.172 "data_size": 7936 00:20:09.172 }, 00:20:09.172 { 00:20:09.172 "name": "pt2", 00:20:09.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:09.172 "is_configured": true, 00:20:09.172 "data_offset": 256, 00:20:09.172 "data_size": 7936 00:20:09.172 } 00:20:09.172 ] 00:20:09.172 }' 00:20:09.172 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.173 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.432 09:57:10 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:09.432 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:09.432 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.432 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.432 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.692 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:09.692 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:09.692 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:09.692 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.692 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:09.692 [2024-11-27 09:57:10.585485] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:09.692 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.692 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 27185a69-65ba-49a1-b5ed-c2e023f716df '!=' 27185a69-65ba-49a1-b5ed-c2e023f716df ']' 00:20:09.692 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89055 00:20:09.692 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89055 ']' 00:20:09.692 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89055 00:20:09.692 09:57:10 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:09.692 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.692 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89055 00:20:09.692 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:09.692 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:09.692 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89055' 00:20:09.692 killing process with pid 89055 00:20:09.692 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89055 00:20:09.692 [2024-11-27 09:57:10.656565] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:09.692 [2024-11-27 09:57:10.656702] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:09.692 09:57:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89055 00:20:09.692 [2024-11-27 09:57:10.656767] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:09.692 [2024-11-27 09:57:10.656786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:09.952 [2024-11-27 09:57:10.883650] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:11.335 09:57:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:20:11.335 00:20:11.335 real 0m6.316s 00:20:11.335 user 0m9.313s 00:20:11.335 sys 0m1.304s 00:20:11.335 09:57:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:20:11.335 09:57:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.335 ************************************ 00:20:11.335 END TEST raid_superblock_test_md_interleaved 00:20:11.335 ************************************ 00:20:11.335 09:57:12 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:20:11.335 09:57:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:11.335 09:57:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:11.335 09:57:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:11.335 ************************************ 00:20:11.335 START TEST raid_rebuild_test_sb_md_interleaved 00:20:11.335 ************************************ 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89382 00:20:11.335 09:57:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89382 00:20:11.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89382 ']' 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.335 09:57:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:11.335 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:11.335 Zero copy mechanism will not be used. 00:20:11.335 [2024-11-27 09:57:12.306845] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:20:11.335 [2024-11-27 09:57:12.306991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89382 ] 00:20:11.595 [2024-11-27 09:57:12.471949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.595 [2024-11-27 09:57:12.614429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.855 [2024-11-27 09:57:12.860773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:11.855 [2024-11-27 09:57:12.860870] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:12.114 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:12.114 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:12.114 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:12.114 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:20:12.114 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.114 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.114 BaseBdev1_malloc 00:20:12.114 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.114 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:12.114 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.114 09:57:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.114 [2024-11-27 09:57:13.212570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:12.114 [2024-11-27 09:57:13.212675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:12.114 [2024-11-27 09:57:13.212703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:12.114 [2024-11-27 09:57:13.212716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:12.114 [2024-11-27 09:57:13.215051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:12.114 [2024-11-27 09:57:13.215094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:12.114 BaseBdev1 00:20:12.114 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.114 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:12.114 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:20:12.114 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.114 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.375 BaseBdev2_malloc 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:20:12.375 [2024-11-27 09:57:13.272593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:12.375 [2024-11-27 09:57:13.272818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:12.375 [2024-11-27 09:57:13.272851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:12.375 [2024-11-27 09:57:13.272867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:12.375 [2024-11-27 09:57:13.275239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:12.375 [2024-11-27 09:57:13.275284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:12.375 BaseBdev2 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.375 spare_malloc 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.375 spare_delay 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.375 [2024-11-27 09:57:13.362790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:12.375 [2024-11-27 09:57:13.362895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:12.375 [2024-11-27 09:57:13.362931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:12.375 [2024-11-27 09:57:13.362946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:12.375 [2024-11-27 09:57:13.365490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:12.375 [2024-11-27 09:57:13.365651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:12.375 spare 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.375 [2024-11-27 09:57:13.374806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:12.375 [2024-11-27 09:57:13.377160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:12.375 [2024-11-27 
09:57:13.377396] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:12.375 [2024-11-27 09:57:13.377413] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:12.375 [2024-11-27 09:57:13.377529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:12.375 [2024-11-27 09:57:13.377609] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:12.375 [2024-11-27 09:57:13.377617] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:12.375 [2024-11-27 09:57:13.377712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.375 "name": "raid_bdev1", 00:20:12.375 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:12.375 "strip_size_kb": 0, 00:20:12.375 "state": "online", 00:20:12.375 "raid_level": "raid1", 00:20:12.375 "superblock": true, 00:20:12.375 "num_base_bdevs": 2, 00:20:12.375 "num_base_bdevs_discovered": 2, 00:20:12.375 "num_base_bdevs_operational": 2, 00:20:12.375 "base_bdevs_list": [ 00:20:12.375 { 00:20:12.375 "name": "BaseBdev1", 00:20:12.375 "uuid": "2fb65473-f865-5b79-9868-d9148a1350d7", 00:20:12.375 "is_configured": true, 00:20:12.375 "data_offset": 256, 00:20:12.375 "data_size": 7936 00:20:12.375 }, 00:20:12.375 { 00:20:12.375 "name": "BaseBdev2", 00:20:12.375 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:12.375 "is_configured": true, 00:20:12.375 "data_offset": 256, 00:20:12.375 "data_size": 7936 00:20:12.375 } 00:20:12.375 ] 00:20:12.375 }' 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.375 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.945 09:57:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.945 [2024-11-27 09:57:13.814381] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:12.945 09:57:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.945 [2024-11-27 09:57:13.901909] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.945 09:57:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.945 "name": "raid_bdev1", 00:20:12.945 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:12.945 "strip_size_kb": 0, 00:20:12.945 "state": "online", 00:20:12.945 "raid_level": "raid1", 00:20:12.945 "superblock": true, 00:20:12.945 "num_base_bdevs": 2, 00:20:12.945 "num_base_bdevs_discovered": 1, 00:20:12.945 "num_base_bdevs_operational": 1, 00:20:12.945 "base_bdevs_list": [ 00:20:12.945 { 00:20:12.945 "name": null, 00:20:12.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.945 "is_configured": false, 00:20:12.945 "data_offset": 0, 00:20:12.945 "data_size": 7936 00:20:12.945 }, 00:20:12.945 { 00:20:12.945 "name": "BaseBdev2", 00:20:12.945 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:12.945 "is_configured": true, 00:20:12.945 "data_offset": 256, 00:20:12.945 "data_size": 7936 00:20:12.945 } 00:20:12.945 ] 00:20:12.945 }' 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.945 09:57:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.514 09:57:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:13.514 09:57:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.514 09:57:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:13.514 [2024-11-27 09:57:14.369138] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:13.514 [2024-11-27 09:57:14.388014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:13.514 09:57:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.514 09:57:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:13.514 [2024-11-27 09:57:14.390528] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:14.474 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:14.474 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:14.474 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:14.474 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:14.474 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.474 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.474 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.474 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.474 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.474 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.474 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:14.474 "name": "raid_bdev1", 00:20:14.474 
"uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:14.474 "strip_size_kb": 0, 00:20:14.474 "state": "online", 00:20:14.474 "raid_level": "raid1", 00:20:14.474 "superblock": true, 00:20:14.474 "num_base_bdevs": 2, 00:20:14.474 "num_base_bdevs_discovered": 2, 00:20:14.474 "num_base_bdevs_operational": 2, 00:20:14.474 "process": { 00:20:14.474 "type": "rebuild", 00:20:14.474 "target": "spare", 00:20:14.474 "progress": { 00:20:14.475 "blocks": 2560, 00:20:14.475 "percent": 32 00:20:14.475 } 00:20:14.475 }, 00:20:14.475 "base_bdevs_list": [ 00:20:14.475 { 00:20:14.475 "name": "spare", 00:20:14.475 "uuid": "cab1a2a9-8cee-5a19-bc77-409a83e42baa", 00:20:14.475 "is_configured": true, 00:20:14.475 "data_offset": 256, 00:20:14.475 "data_size": 7936 00:20:14.475 }, 00:20:14.475 { 00:20:14.475 "name": "BaseBdev2", 00:20:14.475 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:14.475 "is_configured": true, 00:20:14.475 "data_offset": 256, 00:20:14.475 "data_size": 7936 00:20:14.475 } 00:20:14.475 ] 00:20:14.475 }' 00:20:14.475 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:14.475 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:14.475 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:14.475 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:14.475 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:14.475 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.475 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.475 [2024-11-27 09:57:15.529448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:20:14.475 [2024-11-27 09:57:15.601323] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:14.475 [2024-11-27 09:57:15.601444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.475 [2024-11-27 09:57:15.601462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:14.475 [2024-11-27 09:57:15.601477] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:14.734 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.734 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:14.734 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:14.734 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:14.734 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:14.734 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:14.734 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:14.734 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.735 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.735 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.735 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.735 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.735 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.735 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.735 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.735 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.735 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.735 "name": "raid_bdev1", 00:20:14.735 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:14.735 "strip_size_kb": 0, 00:20:14.735 "state": "online", 00:20:14.735 "raid_level": "raid1", 00:20:14.735 "superblock": true, 00:20:14.735 "num_base_bdevs": 2, 00:20:14.735 "num_base_bdevs_discovered": 1, 00:20:14.735 "num_base_bdevs_operational": 1, 00:20:14.735 "base_bdevs_list": [ 00:20:14.735 { 00:20:14.735 "name": null, 00:20:14.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.735 "is_configured": false, 00:20:14.735 "data_offset": 0, 00:20:14.735 "data_size": 7936 00:20:14.735 }, 00:20:14.735 { 00:20:14.735 "name": "BaseBdev2", 00:20:14.735 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:14.735 "is_configured": true, 00:20:14.735 "data_offset": 256, 00:20:14.735 "data_size": 7936 00:20:14.735 } 00:20:14.735 ] 00:20:14.735 }' 00:20:14.735 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.735 09:57:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.994 09:57:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:14.994 09:57:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:20:14.995 09:57:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:14.995 09:57:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:14.995 09:57:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:14.995 09:57:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.995 09:57:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.995 09:57:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.995 09:57:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:14.995 09:57:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.995 09:57:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:14.995 "name": "raid_bdev1", 00:20:14.995 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:14.995 "strip_size_kb": 0, 00:20:14.995 "state": "online", 00:20:14.995 "raid_level": "raid1", 00:20:14.995 "superblock": true, 00:20:14.995 "num_base_bdevs": 2, 00:20:14.995 "num_base_bdevs_discovered": 1, 00:20:14.995 "num_base_bdevs_operational": 1, 00:20:14.995 "base_bdevs_list": [ 00:20:14.995 { 00:20:14.995 "name": null, 00:20:14.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.995 "is_configured": false, 00:20:14.995 "data_offset": 0, 00:20:14.995 "data_size": 7936 00:20:14.995 }, 00:20:14.995 { 00:20:14.995 "name": "BaseBdev2", 00:20:14.995 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:14.995 "is_configured": true, 00:20:14.995 "data_offset": 256, 00:20:14.995 "data_size": 7936 00:20:14.995 } 00:20:14.995 ] 00:20:14.995 }' 
00:20:14.995 09:57:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:15.254 09:57:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:15.254 09:57:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:15.254 09:57:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:15.254 09:57:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:15.254 09:57:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.254 09:57:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:15.254 [2024-11-27 09:57:16.204363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:15.254 [2024-11-27 09:57:16.222523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:15.254 09:57:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.254 09:57:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:15.254 [2024-11-27 09:57:16.224820] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:16.193 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.193 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.193 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:16.193 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:20:16.193 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.193 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.193 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.193 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.193 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.193 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.193 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.193 "name": "raid_bdev1", 00:20:16.193 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:16.193 "strip_size_kb": 0, 00:20:16.193 "state": "online", 00:20:16.193 "raid_level": "raid1", 00:20:16.193 "superblock": true, 00:20:16.193 "num_base_bdevs": 2, 00:20:16.193 "num_base_bdevs_discovered": 2, 00:20:16.193 "num_base_bdevs_operational": 2, 00:20:16.193 "process": { 00:20:16.193 "type": "rebuild", 00:20:16.193 "target": "spare", 00:20:16.193 "progress": { 00:20:16.193 "blocks": 2560, 00:20:16.193 "percent": 32 00:20:16.193 } 00:20:16.193 }, 00:20:16.193 "base_bdevs_list": [ 00:20:16.193 { 00:20:16.193 "name": "spare", 00:20:16.193 "uuid": "cab1a2a9-8cee-5a19-bc77-409a83e42baa", 00:20:16.193 "is_configured": true, 00:20:16.193 "data_offset": 256, 00:20:16.193 "data_size": 7936 00:20:16.193 }, 00:20:16.193 { 00:20:16.193 "name": "BaseBdev2", 00:20:16.193 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:16.193 "is_configured": true, 00:20:16.193 "data_offset": 256, 00:20:16.193 "data_size": 7936 00:20:16.193 } 00:20:16.193 ] 00:20:16.193 }' 00:20:16.193 09:57:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:16.453 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=752 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:16.453 09:57:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:16.453 "name": "raid_bdev1", 00:20:16.453 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:16.453 "strip_size_kb": 0, 00:20:16.453 "state": "online", 00:20:16.453 "raid_level": "raid1", 00:20:16.453 "superblock": true, 00:20:16.453 "num_base_bdevs": 2, 00:20:16.453 "num_base_bdevs_discovered": 2, 00:20:16.453 "num_base_bdevs_operational": 2, 00:20:16.453 "process": { 00:20:16.453 "type": "rebuild", 00:20:16.453 "target": "spare", 00:20:16.453 "progress": { 00:20:16.453 "blocks": 2816, 00:20:16.453 "percent": 35 00:20:16.453 } 00:20:16.453 }, 00:20:16.453 "base_bdevs_list": [ 00:20:16.453 { 00:20:16.453 "name": "spare", 00:20:16.453 "uuid": "cab1a2a9-8cee-5a19-bc77-409a83e42baa", 00:20:16.453 "is_configured": true, 00:20:16.453 "data_offset": 256, 00:20:16.453 "data_size": 7936 00:20:16.453 }, 00:20:16.453 { 00:20:16.453 "name": "BaseBdev2", 00:20:16.453 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:16.453 "is_configured": true, 00:20:16.453 "data_offset": 256, 00:20:16.453 "data_size": 7936 00:20:16.453 } 00:20:16.453 ] 00:20:16.453 }' 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:16.453 09:57:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:17.395 09:57:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:17.395 09:57:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:17.395 09:57:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:17.395 09:57:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:17.395 09:57:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:17.395 09:57:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:17.395 09:57:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.395 09:57:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.395 09:57:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.395 09:57:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:17.395 09:57:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.656 09:57:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:17.656 "name": "raid_bdev1", 00:20:17.656 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:17.656 "strip_size_kb": 0, 00:20:17.656 "state": "online", 00:20:17.656 "raid_level": "raid1", 00:20:17.656 "superblock": true, 00:20:17.656 "num_base_bdevs": 2, 00:20:17.656 "num_base_bdevs_discovered": 2, 00:20:17.656 "num_base_bdevs_operational": 2, 00:20:17.656 "process": { 00:20:17.656 "type": "rebuild", 00:20:17.656 "target": "spare", 00:20:17.656 "progress": { 00:20:17.656 "blocks": 5632, 00:20:17.656 "percent": 70 00:20:17.656 } 00:20:17.656 }, 00:20:17.656 "base_bdevs_list": [ 00:20:17.656 { 00:20:17.656 "name": "spare", 00:20:17.656 "uuid": "cab1a2a9-8cee-5a19-bc77-409a83e42baa", 00:20:17.656 "is_configured": true, 00:20:17.656 "data_offset": 256, 00:20:17.656 "data_size": 7936 00:20:17.656 }, 00:20:17.656 { 00:20:17.656 "name": "BaseBdev2", 00:20:17.656 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:17.656 "is_configured": true, 00:20:17.656 "data_offset": 256, 00:20:17.656 "data_size": 7936 00:20:17.656 } 00:20:17.656 ] 00:20:17.656 }' 00:20:17.656 09:57:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:17.656 09:57:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:17.656 09:57:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:17.656 09:57:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:17.656 09:57:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:18.225 [2024-11-27 09:57:19.348956] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:18.225 [2024-11-27 09:57:19.349145] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:18.225 [2024-11-27 09:57:19.349279] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.485 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:18.485 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.485 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.485 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:18.485 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:18.485 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.485 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.485 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.485 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.485 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.746 "name": "raid_bdev1", 00:20:18.746 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:18.746 "strip_size_kb": 0, 00:20:18.746 "state": "online", 00:20:18.746 "raid_level": "raid1", 00:20:18.746 "superblock": true, 00:20:18.746 "num_base_bdevs": 2, 00:20:18.746 
"num_base_bdevs_discovered": 2, 00:20:18.746 "num_base_bdevs_operational": 2, 00:20:18.746 "base_bdevs_list": [ 00:20:18.746 { 00:20:18.746 "name": "spare", 00:20:18.746 "uuid": "cab1a2a9-8cee-5a19-bc77-409a83e42baa", 00:20:18.746 "is_configured": true, 00:20:18.746 "data_offset": 256, 00:20:18.746 "data_size": 7936 00:20:18.746 }, 00:20:18.746 { 00:20:18.746 "name": "BaseBdev2", 00:20:18.746 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:18.746 "is_configured": true, 00:20:18.746 "data_offset": 256, 00:20:18.746 "data_size": 7936 00:20:18.746 } 00:20:18.746 ] 00:20:18.746 }' 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.746 
09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:18.746 "name": "raid_bdev1", 00:20:18.746 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:18.746 "strip_size_kb": 0, 00:20:18.746 "state": "online", 00:20:18.746 "raid_level": "raid1", 00:20:18.746 "superblock": true, 00:20:18.746 "num_base_bdevs": 2, 00:20:18.746 "num_base_bdevs_discovered": 2, 00:20:18.746 "num_base_bdevs_operational": 2, 00:20:18.746 "base_bdevs_list": [ 00:20:18.746 { 00:20:18.746 "name": "spare", 00:20:18.746 "uuid": "cab1a2a9-8cee-5a19-bc77-409a83e42baa", 00:20:18.746 "is_configured": true, 00:20:18.746 "data_offset": 256, 00:20:18.746 "data_size": 7936 00:20:18.746 }, 00:20:18.746 { 00:20:18.746 "name": "BaseBdev2", 00:20:18.746 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:18.746 "is_configured": true, 00:20:18.746 "data_offset": 256, 00:20:18.746 "data_size": 7936 00:20:18.746 } 00:20:18.746 ] 00:20:18.746 }' 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:18.746 09:57:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:18.746 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.747 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:18.747 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:18.747 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:18.747 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.747 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.747 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.747 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.747 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.747 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.747 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.747 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:18.747 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.747 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.747 "name": 
"raid_bdev1", 00:20:18.747 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:18.747 "strip_size_kb": 0, 00:20:18.747 "state": "online", 00:20:18.747 "raid_level": "raid1", 00:20:18.747 "superblock": true, 00:20:18.747 "num_base_bdevs": 2, 00:20:18.747 "num_base_bdevs_discovered": 2, 00:20:18.747 "num_base_bdevs_operational": 2, 00:20:18.747 "base_bdevs_list": [ 00:20:18.747 { 00:20:18.747 "name": "spare", 00:20:18.747 "uuid": "cab1a2a9-8cee-5a19-bc77-409a83e42baa", 00:20:18.747 "is_configured": true, 00:20:18.747 "data_offset": 256, 00:20:18.747 "data_size": 7936 00:20:18.747 }, 00:20:18.747 { 00:20:18.747 "name": "BaseBdev2", 00:20:18.747 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:18.747 "is_configured": true, 00:20:18.747 "data_offset": 256, 00:20:18.747 "data_size": 7936 00:20:18.747 } 00:20:18.747 ] 00:20:18.747 }' 00:20:18.747 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.747 09:57:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.316 [2024-11-27 09:57:20.262787] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:19.316 [2024-11-27 09:57:20.262831] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:19.316 [2024-11-27 09:57:20.262964] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:19.316 [2024-11-27 09:57:20.263051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:19.316 [2024-11-27 
09:57:20.263070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.316 09:57:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.316 [2024-11-27 09:57:20.322656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:19.316 [2024-11-27 09:57:20.322783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:19.316 [2024-11-27 09:57:20.322831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:19.316 [2024-11-27 09:57:20.322863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:19.316 [2024-11-27 09:57:20.325459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:19.316 [2024-11-27 09:57:20.325545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:19.316 [2024-11-27 09:57:20.325650] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:19.316 [2024-11-27 09:57:20.325755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:19.316 [2024-11-27 09:57:20.325956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:19.316 spare 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.316 [2024-11-27 09:57:20.425941] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:19.316 [2024-11-27 09:57:20.426051] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:19.316 [2024-11-27 09:57:20.426242] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:19.316 [2024-11-27 09:57:20.426398] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:19.316 [2024-11-27 09:57:20.426440] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:19.316 [2024-11-27 09:57:20.426614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.316 09:57:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.316 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.576 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.576 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:19.576 "name": "raid_bdev1", 00:20:19.576 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:19.576 "strip_size_kb": 0, 00:20:19.576 "state": "online", 00:20:19.576 "raid_level": "raid1", 00:20:19.576 "superblock": true, 00:20:19.576 "num_base_bdevs": 2, 00:20:19.576 "num_base_bdevs_discovered": 2, 00:20:19.576 "num_base_bdevs_operational": 2, 00:20:19.576 "base_bdevs_list": [ 00:20:19.576 { 00:20:19.576 "name": "spare", 00:20:19.576 "uuid": "cab1a2a9-8cee-5a19-bc77-409a83e42baa", 00:20:19.576 "is_configured": true, 00:20:19.576 "data_offset": 256, 00:20:19.576 "data_size": 7936 00:20:19.576 }, 00:20:19.576 { 00:20:19.576 "name": "BaseBdev2", 00:20:19.576 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:19.576 "is_configured": true, 00:20:19.576 "data_offset": 256, 00:20:19.576 "data_size": 7936 00:20:19.576 } 00:20:19.576 ] 00:20:19.576 }' 00:20:19.576 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:19.576 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.836 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:19.836 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:19.836 09:57:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:19.836 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:19.836 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:19.836 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:19.836 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.836 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.836 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:19.836 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.836 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:19.836 "name": "raid_bdev1", 00:20:19.836 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:19.836 "strip_size_kb": 0, 00:20:19.836 "state": "online", 00:20:19.836 "raid_level": "raid1", 00:20:19.836 "superblock": true, 00:20:19.836 "num_base_bdevs": 2, 00:20:19.836 "num_base_bdevs_discovered": 2, 00:20:19.836 "num_base_bdevs_operational": 2, 00:20:19.836 "base_bdevs_list": [ 00:20:19.836 { 00:20:19.836 "name": "spare", 00:20:19.836 "uuid": "cab1a2a9-8cee-5a19-bc77-409a83e42baa", 00:20:19.836 "is_configured": true, 00:20:19.836 "data_offset": 256, 00:20:19.836 "data_size": 7936 00:20:19.836 }, 00:20:19.836 { 00:20:19.836 "name": "BaseBdev2", 00:20:19.836 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:19.836 "is_configured": true, 00:20:19.836 "data_offset": 256, 00:20:19.836 "data_size": 7936 00:20:19.836 } 00:20:19.836 ] 00:20:19.836 }' 00:20:19.836 09:57:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:20.096 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:20.096 09:57:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.096 [2024-11-27 09:57:21.089572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:20.096 09:57:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:20.096 "name": "raid_bdev1", 00:20:20.096 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:20.096 "strip_size_kb": 0, 00:20:20.096 "state": "online", 00:20:20.096 
"raid_level": "raid1", 00:20:20.096 "superblock": true, 00:20:20.096 "num_base_bdevs": 2, 00:20:20.096 "num_base_bdevs_discovered": 1, 00:20:20.096 "num_base_bdevs_operational": 1, 00:20:20.096 "base_bdevs_list": [ 00:20:20.096 { 00:20:20.096 "name": null, 00:20:20.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.096 "is_configured": false, 00:20:20.096 "data_offset": 0, 00:20:20.096 "data_size": 7936 00:20:20.096 }, 00:20:20.096 { 00:20:20.096 "name": "BaseBdev2", 00:20:20.096 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:20.096 "is_configured": true, 00:20:20.096 "data_offset": 256, 00:20:20.096 "data_size": 7936 00:20:20.096 } 00:20:20.096 ] 00:20:20.096 }' 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:20.096 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.665 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:20.665 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.665 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:20.665 [2024-11-27 09:57:21.540817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:20.665 [2024-11-27 09:57:21.541130] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:20.665 [2024-11-27 09:57:21.541199] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:20.665 [2024-11-27 09:57:21.541289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:20.665 [2024-11-27 09:57:21.559065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:20.665 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.665 09:57:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:20.665 [2024-11-27 09:57:21.561244] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:21.605 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:21.605 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:21.605 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:21.605 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:21.605 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:21.605 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.605 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.605 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.605 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.605 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.605 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:20:21.605 "name": "raid_bdev1", 00:20:21.605 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:21.605 "strip_size_kb": 0, 00:20:21.605 "state": "online", 00:20:21.605 "raid_level": "raid1", 00:20:21.605 "superblock": true, 00:20:21.605 "num_base_bdevs": 2, 00:20:21.605 "num_base_bdevs_discovered": 2, 00:20:21.605 "num_base_bdevs_operational": 2, 00:20:21.605 "process": { 00:20:21.605 "type": "rebuild", 00:20:21.605 "target": "spare", 00:20:21.605 "progress": { 00:20:21.605 "blocks": 2560, 00:20:21.605 "percent": 32 00:20:21.605 } 00:20:21.605 }, 00:20:21.605 "base_bdevs_list": [ 00:20:21.605 { 00:20:21.605 "name": "spare", 00:20:21.605 "uuid": "cab1a2a9-8cee-5a19-bc77-409a83e42baa", 00:20:21.605 "is_configured": true, 00:20:21.605 "data_offset": 256, 00:20:21.605 "data_size": 7936 00:20:21.605 }, 00:20:21.605 { 00:20:21.605 "name": "BaseBdev2", 00:20:21.605 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:21.605 "is_configured": true, 00:20:21.605 "data_offset": 256, 00:20:21.605 "data_size": 7936 00:20:21.605 } 00:20:21.605 ] 00:20:21.605 }' 00:20:21.605 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:21.605 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:21.605 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:21.605 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:21.605 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:21.605 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.605 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.605 [2024-11-27 09:57:22.725045] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:21.865 [2024-11-27 09:57:22.770739] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:21.866 [2024-11-27 09:57:22.770878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.866 [2024-11-27 09:57:22.770917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:21.866 [2024-11-27 09:57:22.770941] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:21.866 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.866 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:21.866 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.866 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.866 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:21.866 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:21.866 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:21.866 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.866 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.866 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.866 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.866 09:57:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.866 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.866 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.866 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:21.866 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.866 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.866 "name": "raid_bdev1", 00:20:21.866 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:21.866 "strip_size_kb": 0, 00:20:21.866 "state": "online", 00:20:21.866 "raid_level": "raid1", 00:20:21.866 "superblock": true, 00:20:21.866 "num_base_bdevs": 2, 00:20:21.866 "num_base_bdevs_discovered": 1, 00:20:21.866 "num_base_bdevs_operational": 1, 00:20:21.866 "base_bdevs_list": [ 00:20:21.866 { 00:20:21.866 "name": null, 00:20:21.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:21.866 "is_configured": false, 00:20:21.866 "data_offset": 0, 00:20:21.866 "data_size": 7936 00:20:21.866 }, 00:20:21.866 { 00:20:21.866 "name": "BaseBdev2", 00:20:21.866 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:21.866 "is_configured": true, 00:20:21.866 "data_offset": 256, 00:20:21.866 "data_size": 7936 00:20:21.866 } 00:20:21.866 ] 00:20:21.866 }' 00:20:21.866 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.866 09:57:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.126 09:57:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:22.126 09:57:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.126 09:57:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:22.126 [2024-11-27 09:57:23.256801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:22.126 [2024-11-27 09:57:23.256952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.126 [2024-11-27 09:57:23.257014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:22.126 [2024-11-27 09:57:23.257076] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.126 [2024-11-27 09:57:23.257360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.126 [2024-11-27 09:57:23.257417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:22.386 [2024-11-27 09:57:23.257518] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:22.386 [2024-11-27 09:57:23.257539] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:22.386 [2024-11-27 09:57:23.257551] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:22.386 [2024-11-27 09:57:23.257580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:22.386 spare 00:20:22.386 [2024-11-27 09:57:23.276228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:22.386 09:57:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.386 09:57:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:22.386 [2024-11-27 09:57:23.278537] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:23.324 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:23.324 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.324 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:23.324 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:23.324 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.324 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.324 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.324 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.324 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.324 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.324 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:20:23.324 "name": "raid_bdev1", 00:20:23.324 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:23.324 "strip_size_kb": 0, 00:20:23.324 "state": "online", 00:20:23.324 "raid_level": "raid1", 00:20:23.324 "superblock": true, 00:20:23.324 "num_base_bdevs": 2, 00:20:23.324 "num_base_bdevs_discovered": 2, 00:20:23.324 "num_base_bdevs_operational": 2, 00:20:23.324 "process": { 00:20:23.324 "type": "rebuild", 00:20:23.324 "target": "spare", 00:20:23.324 "progress": { 00:20:23.324 "blocks": 2560, 00:20:23.324 "percent": 32 00:20:23.324 } 00:20:23.324 }, 00:20:23.324 "base_bdevs_list": [ 00:20:23.324 { 00:20:23.324 "name": "spare", 00:20:23.324 "uuid": "cab1a2a9-8cee-5a19-bc77-409a83e42baa", 00:20:23.324 "is_configured": true, 00:20:23.324 "data_offset": 256, 00:20:23.324 "data_size": 7936 00:20:23.324 }, 00:20:23.324 { 00:20:23.324 "name": "BaseBdev2", 00:20:23.324 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:23.324 "is_configured": true, 00:20:23.324 "data_offset": 256, 00:20:23.324 "data_size": 7936 00:20:23.324 } 00:20:23.324 ] 00:20:23.324 }' 00:20:23.324 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:23.324 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:23.324 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:23.324 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:23.325 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:23.325 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.325 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.325 [2024-11-27 
09:57:24.437211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:23.585 [2024-11-27 09:57:24.489022] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:23.585 [2024-11-27 09:57:24.489165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.585 [2024-11-27 09:57:24.489190] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:23.585 [2024-11-27 09:57:24.489199] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:23.585 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.585 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:23.585 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:23.585 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:23.585 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:23.585 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:23.585 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:23.585 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.585 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.585 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.585 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.585 09:57:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.585 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.585 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.585 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.585 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.585 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.585 "name": "raid_bdev1", 00:20:23.585 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:23.585 "strip_size_kb": 0, 00:20:23.585 "state": "online", 00:20:23.585 "raid_level": "raid1", 00:20:23.585 "superblock": true, 00:20:23.585 "num_base_bdevs": 2, 00:20:23.585 "num_base_bdevs_discovered": 1, 00:20:23.585 "num_base_bdevs_operational": 1, 00:20:23.585 "base_bdevs_list": [ 00:20:23.585 { 00:20:23.585 "name": null, 00:20:23.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.585 "is_configured": false, 00:20:23.585 "data_offset": 0, 00:20:23.585 "data_size": 7936 00:20:23.585 }, 00:20:23.585 { 00:20:23.585 "name": "BaseBdev2", 00:20:23.585 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:23.585 "is_configured": true, 00:20:23.585 "data_offset": 256, 00:20:23.585 "data_size": 7936 00:20:23.585 } 00:20:23.585 ] 00:20:23.585 }' 00:20:23.585 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.585 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:23.845 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:23.845 09:57:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:23.845 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:23.845 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:23.845 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:23.845 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.845 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.845 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.845 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.105 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.105 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:24.105 "name": "raid_bdev1", 00:20:24.105 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:24.105 "strip_size_kb": 0, 00:20:24.105 "state": "online", 00:20:24.105 "raid_level": "raid1", 00:20:24.105 "superblock": true, 00:20:24.105 "num_base_bdevs": 2, 00:20:24.105 "num_base_bdevs_discovered": 1, 00:20:24.105 "num_base_bdevs_operational": 1, 00:20:24.105 "base_bdevs_list": [ 00:20:24.105 { 00:20:24.105 "name": null, 00:20:24.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.105 "is_configured": false, 00:20:24.105 "data_offset": 0, 00:20:24.105 "data_size": 7936 00:20:24.105 }, 00:20:24.105 { 00:20:24.105 "name": "BaseBdev2", 00:20:24.105 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:24.105 "is_configured": true, 00:20:24.105 "data_offset": 256, 
00:20:24.105 "data_size": 7936 00:20:24.105 } 00:20:24.105 ] 00:20:24.105 }' 00:20:24.105 09:57:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:24.105 09:57:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:24.105 09:57:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:24.105 09:57:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:24.105 09:57:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:24.105 09:57:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.105 09:57:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.105 09:57:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.105 09:57:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:24.105 09:57:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.105 09:57:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:24.105 [2024-11-27 09:57:25.118940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:24.105 [2024-11-27 09:57:25.119094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.105 [2024-11-27 09:57:25.119137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:24.105 [2024-11-27 09:57:25.119170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.105 [2024-11-27 09:57:25.119406] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.105 [2024-11-27 09:57:25.119454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:24.105 [2024-11-27 09:57:25.119539] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:24.105 [2024-11-27 09:57:25.119576] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:24.105 [2024-11-27 09:57:25.119615] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:24.105 [2024-11-27 09:57:25.119652] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:24.105 BaseBdev1 00:20:24.105 09:57:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.105 09:57:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:25.042 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:25.042 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.042 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.042 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:25.042 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:25.042 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:25.042 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.042 09:57:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.042 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.042 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.043 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.043 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.043 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.043 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.043 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.302 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.302 "name": "raid_bdev1", 00:20:25.302 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:25.302 "strip_size_kb": 0, 00:20:25.302 "state": "online", 00:20:25.302 "raid_level": "raid1", 00:20:25.302 "superblock": true, 00:20:25.302 "num_base_bdevs": 2, 00:20:25.302 "num_base_bdevs_discovered": 1, 00:20:25.302 "num_base_bdevs_operational": 1, 00:20:25.302 "base_bdevs_list": [ 00:20:25.302 { 00:20:25.302 "name": null, 00:20:25.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.302 "is_configured": false, 00:20:25.302 "data_offset": 0, 00:20:25.302 "data_size": 7936 00:20:25.302 }, 00:20:25.302 { 00:20:25.302 "name": "BaseBdev2", 00:20:25.302 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:25.302 "is_configured": true, 00:20:25.302 "data_offset": 256, 00:20:25.302 "data_size": 7936 00:20:25.302 } 00:20:25.302 ] 00:20:25.302 }' 00:20:25.302 09:57:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.302 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.561 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:25.561 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:25.561 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:25.561 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:25.561 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:25.561 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.561 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.561 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.561 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.561 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.561 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:25.561 "name": "raid_bdev1", 00:20:25.561 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:25.561 "strip_size_kb": 0, 00:20:25.561 "state": "online", 00:20:25.561 "raid_level": "raid1", 00:20:25.561 "superblock": true, 00:20:25.561 "num_base_bdevs": 2, 00:20:25.561 "num_base_bdevs_discovered": 1, 00:20:25.561 "num_base_bdevs_operational": 1, 00:20:25.561 "base_bdevs_list": [ 00:20:25.561 { 00:20:25.561 "name": 
null, 00:20:25.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.561 "is_configured": false, 00:20:25.561 "data_offset": 0, 00:20:25.561 "data_size": 7936 00:20:25.561 }, 00:20:25.561 { 00:20:25.561 "name": "BaseBdev2", 00:20:25.561 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:25.561 "is_configured": true, 00:20:25.561 "data_offset": 256, 00:20:25.561 "data_size": 7936 00:20:25.561 } 00:20:25.561 ] 00:20:25.561 }' 00:20:25.561 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:25.561 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:25.561 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:25.561 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:25.561 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:25.562 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:20:25.562 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:25.562 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:25.562 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.562 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:25.562 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.562 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:25.562 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.562 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:25.562 [2024-11-27 09:57:26.636490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:25.562 [2024-11-27 09:57:26.636771] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:25.562 [2024-11-27 09:57:26.636844] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:25.562 request: 00:20:25.562 { 00:20:25.562 "base_bdev": "BaseBdev1", 00:20:25.562 "raid_bdev": "raid_bdev1", 00:20:25.562 "method": "bdev_raid_add_base_bdev", 00:20:25.562 "req_id": 1 00:20:25.562 } 00:20:25.562 Got JSON-RPC error response 00:20:25.562 response: 00:20:25.562 { 00:20:25.562 "code": -22, 00:20:25.562 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:25.562 } 00:20:25.562 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:25.562 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:20:25.562 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:25.562 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:25.562 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:25.562 09:57:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:26.941 09:57:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:20:26.941 09:57:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:26.941 09:57:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:26.941 09:57:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:26.941 09:57:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:26.941 09:57:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:26.941 09:57:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:26.941 09:57:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:26.941 09:57:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:26.941 09:57:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:26.941 09:57:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.941 09:57:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.941 09:57:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.941 09:57:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:26.941 09:57:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.941 09:57:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:26.941 "name": "raid_bdev1", 00:20:26.941 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:26.941 "strip_size_kb": 0, 
00:20:26.941 "state": "online", 00:20:26.941 "raid_level": "raid1", 00:20:26.941 "superblock": true, 00:20:26.941 "num_base_bdevs": 2, 00:20:26.941 "num_base_bdevs_discovered": 1, 00:20:26.941 "num_base_bdevs_operational": 1, 00:20:26.941 "base_bdevs_list": [ 00:20:26.941 { 00:20:26.941 "name": null, 00:20:26.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.941 "is_configured": false, 00:20:26.941 "data_offset": 0, 00:20:26.941 "data_size": 7936 00:20:26.941 }, 00:20:26.941 { 00:20:26.941 "name": "BaseBdev2", 00:20:26.941 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:26.941 "is_configured": true, 00:20:26.941 "data_offset": 256, 00:20:26.941 "data_size": 7936 00:20:26.941 } 00:20:26.941 ] 00:20:26.941 }' 00:20:26.941 09:57:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:26.941 09:57:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:27.201 09:57:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:27.201 "name": "raid_bdev1", 00:20:27.201 "uuid": "90a0f626-3577-4d32-a3e9-e42a9e378266", 00:20:27.201 "strip_size_kb": 0, 00:20:27.201 "state": "online", 00:20:27.201 "raid_level": "raid1", 00:20:27.201 "superblock": true, 00:20:27.201 "num_base_bdevs": 2, 00:20:27.201 "num_base_bdevs_discovered": 1, 00:20:27.201 "num_base_bdevs_operational": 1, 00:20:27.201 "base_bdevs_list": [ 00:20:27.201 { 00:20:27.201 "name": null, 00:20:27.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.201 "is_configured": false, 00:20:27.201 "data_offset": 0, 00:20:27.201 "data_size": 7936 00:20:27.201 }, 00:20:27.201 { 00:20:27.201 "name": "BaseBdev2", 00:20:27.201 "uuid": "26efc76d-a44b-5a44-bac5-80692c6bd8da", 00:20:27.201 "is_configured": true, 00:20:27.201 "data_offset": 256, 00:20:27.201 "data_size": 7936 00:20:27.201 } 00:20:27.201 ] 00:20:27.201 }' 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89382 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89382 ']' 00:20:27.201 09:57:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89382 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89382 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:27.201 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89382' 00:20:27.201 killing process with pid 89382 00:20:27.201 Received shutdown signal, test time was about 60.000000 seconds 00:20:27.201 00:20:27.201 Latency(us) 00:20:27.202 [2024-11-27T09:57:28.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.202 [2024-11-27T09:57:28.335Z] =================================================================================================================== 00:20:27.202 [2024-11-27T09:57:28.335Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:27.202 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89382 00:20:27.202 [2024-11-27 09:57:28.266236] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:27.202 09:57:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89382 00:20:27.202 [2024-11-27 09:57:28.266384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:27.202 [2024-11-27 09:57:28.266440] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:20:27.202 [2024-11-27 09:57:28.266452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:27.461 [2024-11-27 09:57:28.587659] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:28.843 09:57:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:20:28.843 00:20:28.843 real 0m17.575s 00:20:28.843 user 0m22.726s 00:20:28.843 sys 0m1.857s 00:20:28.843 ************************************ 00:20:28.843 END TEST raid_rebuild_test_sb_md_interleaved 00:20:28.843 ************************************ 00:20:28.843 09:57:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.843 09:57:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:28.843 09:57:29 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:20:28.843 09:57:29 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:20:28.843 09:57:29 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89382 ']' 00:20:28.843 09:57:29 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89382 00:20:28.843 09:57:29 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:20:28.843 ************************************ 00:20:28.843 00:20:28.843 real 12m14.141s 00:20:28.843 user 16m14.616s 00:20:28.843 sys 2m4.975s 00:20:28.843 09:57:29 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.843 09:57:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:28.843 END TEST bdev_raid 00:20:28.843 ************************************ 00:20:28.843 09:57:29 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:28.843 09:57:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:28.843 09:57:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:28.843 09:57:29 -- common/autotest_common.sh@10 -- # set +x 00:20:28.843 
************************************ 00:20:28.843 START TEST spdkcli_raid 00:20:28.843 ************************************ 00:20:28.843 09:57:29 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:29.103 * Looking for test storage... 00:20:29.103 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:29.103 09:57:30 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:29.103 09:57:30 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:20:29.103 09:57:30 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:29.103 09:57:30 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:29.103 09:57:30 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:20:29.103 09:57:30 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:29.103 09:57:30 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:29.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.103 --rc genhtml_branch_coverage=1 00:20:29.103 --rc genhtml_function_coverage=1 00:20:29.103 --rc genhtml_legend=1 00:20:29.103 --rc geninfo_all_blocks=1 00:20:29.103 --rc geninfo_unexecuted_blocks=1 00:20:29.103 00:20:29.103 ' 00:20:29.103 09:57:30 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:29.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.103 --rc genhtml_branch_coverage=1 00:20:29.103 --rc genhtml_function_coverage=1 00:20:29.103 --rc genhtml_legend=1 00:20:29.103 --rc geninfo_all_blocks=1 00:20:29.103 --rc geninfo_unexecuted_blocks=1 00:20:29.103 00:20:29.103 ' 00:20:29.103 
09:57:30 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:29.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.103 --rc genhtml_branch_coverage=1 00:20:29.103 --rc genhtml_function_coverage=1 00:20:29.103 --rc genhtml_legend=1 00:20:29.103 --rc geninfo_all_blocks=1 00:20:29.103 --rc geninfo_unexecuted_blocks=1 00:20:29.103 00:20:29.103 ' 00:20:29.103 09:57:30 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:29.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.103 --rc genhtml_branch_coverage=1 00:20:29.103 --rc genhtml_function_coverage=1 00:20:29.103 --rc genhtml_legend=1 00:20:29.103 --rc geninfo_all_blocks=1 00:20:29.103 --rc geninfo_unexecuted_blocks=1 00:20:29.103 00:20:29.103 ' 00:20:29.103 09:57:30 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:29.103 09:57:30 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:29.103 09:57:30 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:29.103 09:57:30 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:29.103 09:57:30 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:29.103 09:57:30 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:29.103 09:57:30 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:29.103 09:57:30 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:29.103 09:57:30 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:29.103 09:57:30 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:29.103 09:57:30 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:20:29.103 09:57:30 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:29.103 09:57:30 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:29.103 09:57:30 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:29.104 09:57:30 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:29.104 09:57:30 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:29.104 09:57:30 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:29.104 09:57:30 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:29.104 09:57:30 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:29.104 09:57:30 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:29.104 09:57:30 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:29.104 09:57:30 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:29.104 09:57:30 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:29.104 09:57:30 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:20:29.104 09:57:30 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:20:29.104 09:57:30 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:20:29.104 09:57:30 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:29.104 09:57:30 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:20:29.104 09:57:30 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:20:29.104 09:57:30 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:20:29.104 09:57:30 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:20:29.104 09:57:30 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:20:29.104 09:57:30 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:20:29.104 09:57:30 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:29.104 09:57:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:29.104 09:57:30 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:20:29.104 09:57:30 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90060 00:20:29.104 09:57:30 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:20:29.104 09:57:30 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90060 00:20:29.104 09:57:30 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90060 ']' 00:20:29.104 09:57:30 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.104 09:57:30 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.104 09:57:30 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.104 09:57:30 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.104 09:57:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:29.363 [2024-11-27 09:57:30.318990] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:20:29.364 [2024-11-27 09:57:30.319771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90060 ] 00:20:29.364 [2024-11-27 09:57:30.481157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:29.625 [2024-11-27 09:57:30.620415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.625 [2024-11-27 09:57:30.620459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.605 09:57:31 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.605 09:57:31 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:20:30.605 09:57:31 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:20:30.605 09:57:31 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:30.605 09:57:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:30.605 09:57:31 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:20:30.605 09:57:31 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:30.605 09:57:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:30.605 09:57:31 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:20:30.605 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:20:30.605 ' 00:20:32.515 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:20:32.515 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:20:32.515 09:57:33 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:20:32.515 09:57:33 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:32.515 09:57:33 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:20:32.515 09:57:33 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:20:32.515 09:57:33 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:32.515 09:57:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:32.515 09:57:33 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:20:32.515 ' 00:20:33.454 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:20:33.454 09:57:34 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:20:33.454 09:57:34 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:33.454 09:57:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:33.714 09:57:34 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:20:33.714 09:57:34 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:33.714 09:57:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:33.714 09:57:34 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:20:33.714 09:57:34 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:20:34.283 09:57:35 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:20:34.283 09:57:35 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:20:34.283 09:57:35 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:20:34.283 09:57:35 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:34.284 09:57:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:34.284 09:57:35 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:20:34.284 09:57:35 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:34.284 09:57:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:34.284 09:57:35 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:20:34.284 ' 00:20:35.223 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:20:35.223 09:57:36 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:20:35.223 09:57:36 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:35.223 09:57:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:35.482 09:57:36 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:20:35.482 09:57:36 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:35.482 09:57:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:35.482 09:57:36 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:20:35.482 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:20:35.482 ' 00:20:36.862 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:20:36.862 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:20:36.862 09:57:37 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:20:36.862 09:57:37 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:36.862 09:57:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:36.862 09:57:37 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90060 00:20:36.862 09:57:37 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90060 ']' 00:20:36.862 09:57:37 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90060 00:20:36.862 09:57:37 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:20:36.862 09:57:37 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:36.862 09:57:37 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90060 00:20:36.862 killing process with pid 90060 00:20:36.862 09:57:37 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:36.862 09:57:37 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:36.862 09:57:37 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90060' 00:20:36.862 09:57:37 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90060 00:20:36.862 09:57:37 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90060 00:20:39.406 09:57:40 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:20:39.406 09:57:40 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90060 ']' 00:20:39.406 09:57:40 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90060 00:20:39.406 Process with pid 90060 is not found 00:20:39.406 09:57:40 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90060 ']' 00:20:39.406 09:57:40 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90060 00:20:39.406 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90060) - No such process 00:20:39.406 09:57:40 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90060 is not found' 00:20:39.406 09:57:40 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:20:39.406 09:57:40 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:20:39.406 09:57:40 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:20:39.406 09:57:40 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:20:39.406 ************************************ 00:20:39.406 END TEST spdkcli_raid 
00:20:39.406 ************************************ 00:20:39.406 00:20:39.406 real 0m10.430s 00:20:39.406 user 0m21.303s 00:20:39.406 sys 0m1.346s 00:20:39.406 09:57:40 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:39.406 09:57:40 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:20:39.406 09:57:40 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:39.406 09:57:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:39.406 09:57:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:39.406 09:57:40 -- common/autotest_common.sh@10 -- # set +x 00:20:39.406 ************************************ 00:20:39.406 START TEST blockdev_raid5f 00:20:39.406 ************************************ 00:20:39.406 09:57:40 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:20:39.665 * Looking for test storage... 00:20:39.665 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:20:39.665 09:57:40 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:39.665 09:57:40 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:20:39.665 09:57:40 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:39.665 09:57:40 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:20:39.665 09:57:40 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.666 09:57:40 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:20:39.666 09:57:40 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.666 09:57:40 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.666 09:57:40 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.666 09:57:40 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:20:39.666 09:57:40 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.666 09:57:40 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:39.666 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.666 --rc genhtml_branch_coverage=1 00:20:39.666 --rc genhtml_function_coverage=1 00:20:39.666 --rc genhtml_legend=1 00:20:39.666 --rc geninfo_all_blocks=1 00:20:39.666 --rc geninfo_unexecuted_blocks=1 00:20:39.666 00:20:39.666 ' 00:20:39.666 09:57:40 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:39.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.666 --rc genhtml_branch_coverage=1 00:20:39.666 --rc genhtml_function_coverage=1 00:20:39.666 --rc genhtml_legend=1 00:20:39.666 --rc geninfo_all_blocks=1 00:20:39.666 --rc geninfo_unexecuted_blocks=1 00:20:39.666 00:20:39.666 ' 00:20:39.666 09:57:40 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:39.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.666 --rc genhtml_branch_coverage=1 00:20:39.666 --rc genhtml_function_coverage=1 00:20:39.666 --rc genhtml_legend=1 00:20:39.666 --rc geninfo_all_blocks=1 00:20:39.666 --rc geninfo_unexecuted_blocks=1 00:20:39.666 00:20:39.666 ' 00:20:39.666 09:57:40 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:39.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.666 --rc genhtml_branch_coverage=1 00:20:39.666 --rc genhtml_function_coverage=1 00:20:39.666 --rc genhtml_legend=1 00:20:39.666 --rc geninfo_all_blocks=1 00:20:39.666 --rc geninfo_unexecuted_blocks=1 00:20:39.666 00:20:39.666 ' 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90342 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:20:39.666 09:57:40 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90342 00:20:39.666 09:57:40 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90342 ']' 00:20:39.666 09:57:40 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.666 09:57:40 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.666 09:57:40 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.666 09:57:40 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.666 09:57:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:39.666 [2024-11-27 09:57:40.782203] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:20:39.666 [2024-11-27 09:57:40.782808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90342 ] 00:20:39.926 [2024-11-27 09:57:40.957118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.185 [2024-11-27 09:57:41.065109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.756 09:57:41 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.756 09:57:41 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:20:40.756 09:57:41 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:20:40.756 09:57:41 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:20:40.756 09:57:41 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:20:40.756 09:57:41 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.756 09:57:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:41.015 Malloc0 00:20:41.015 Malloc1 00:20:41.015 Malloc2 00:20:41.015 09:57:42 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.015 09:57:42 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:20:41.015 09:57:42 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.015 09:57:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:41.015 09:57:42 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.016 09:57:42 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:20:41.016 09:57:42 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:20:41.016 09:57:42 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.016 09:57:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:41.016 09:57:42 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.016 09:57:42 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:20:41.016 09:57:42 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.016 09:57:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:41.016 09:57:42 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.016 09:57:42 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:20:41.016 09:57:42 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.016 09:57:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:41.016 09:57:42 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.016 09:57:42 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:20:41.016 09:57:42 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:20:41.016 09:57:42 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:20:41.016 09:57:42 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.016 09:57:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:41.016 09:57:42 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.016 09:57:42 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:20:41.016 09:57:42 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:20:41.016 09:57:42 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "e66392c4-ee96-4ae3-bfef-2ed2b8bfa940"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "e66392c4-ee96-4ae3-bfef-2ed2b8bfa940",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "e66392c4-ee96-4ae3-bfef-2ed2b8bfa940",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "51d8878a-cf66-47ed-bebe-ed8ae40f448a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": 
"47f629a9-18ab-47f9-9254-9e5d0e03277b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "d7459c5e-779b-4400-8922-31534e1722d8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:41.276 09:57:42 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:20:41.276 09:57:42 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:20:41.276 09:57:42 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:20:41.276 09:57:42 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90342 00:20:41.276 09:57:42 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90342 ']' 00:20:41.276 09:57:42 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90342 00:20:41.276 09:57:42 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:20:41.276 09:57:42 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.276 09:57:42 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90342 00:20:41.276 killing process with pid 90342 00:20:41.276 09:57:42 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:41.276 09:57:42 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:41.276 09:57:42 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90342' 00:20:41.276 09:57:42 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90342 00:20:41.276 09:57:42 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90342 00:20:43.815 09:57:44 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:43.815 09:57:44 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:43.815 09:57:44 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:43.815 09:57:44 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:43.815 09:57:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:43.815 ************************************ 00:20:43.815 START TEST bdev_hello_world 00:20:43.815 ************************************ 00:20:43.815 09:57:44 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:20:43.815 [2024-11-27 09:57:44.836723] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:20:43.815 [2024-11-27 09:57:44.836841] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90409 ] 00:20:44.074 [2024-11-27 09:57:45.008067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.074 [2024-11-27 09:57:45.119384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.642 [2024-11-27 09:57:45.618375] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:20:44.642 [2024-11-27 09:57:45.618437] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:20:44.642 [2024-11-27 09:57:45.618453] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:20:44.642 [2024-11-27 09:57:45.618916] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:20:44.642 [2024-11-27 09:57:45.619051] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:20:44.642 [2024-11-27 09:57:45.619067] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:20:44.642 [2024-11-27 09:57:45.619113] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:20:44.642 00:20:44.642 [2024-11-27 09:57:45.619130] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:20:46.021 00:20:46.021 real 0m2.182s 00:20:46.021 user 0m1.822s 00:20:46.021 sys 0m0.239s 00:20:46.021 ************************************ 00:20:46.021 END TEST bdev_hello_world 00:20:46.021 ************************************ 00:20:46.021 09:57:46 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.021 09:57:46 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:20:46.021 09:57:46 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:20:46.021 09:57:46 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:46.021 09:57:46 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.021 09:57:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:46.021 ************************************ 00:20:46.021 START TEST bdev_bounds 00:20:46.021 ************************************ 00:20:46.021 09:57:46 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:20:46.021 09:57:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90450 00:20:46.021 09:57:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:46.021 09:57:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:20:46.021 Process bdevio pid: 90450 00:20:46.021 09:57:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90450' 00:20:46.021 09:57:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90450 00:20:46.021 09:57:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90450 ']' 00:20:46.021 09:57:47 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.021 09:57:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:46.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.021 09:57:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.021 09:57:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:46.021 09:57:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:46.021 [2024-11-27 09:57:47.093500] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:20:46.021 [2024-11-27 09:57:47.093634] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90450 ] 00:20:46.281 [2024-11-27 09:57:47.272227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:46.281 [2024-11-27 09:57:47.383127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.281 [2024-11-27 09:57:47.383287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.281 [2024-11-27 09:57:47.383330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.849 09:57:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.849 09:57:47 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:20:46.849 09:57:47 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:20:47.119 I/O targets: 00:20:47.119 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:20:47.119 00:20:47.119 
00:20:47.119 CUnit - A unit testing framework for C - Version 2.1-3 00:20:47.119 http://cunit.sourceforge.net/ 00:20:47.119 00:20:47.119 00:20:47.119 Suite: bdevio tests on: raid5f 00:20:47.119 Test: blockdev write read block ...passed 00:20:47.119 Test: blockdev write zeroes read block ...passed 00:20:47.119 Test: blockdev write zeroes read no split ...passed 00:20:47.119 Test: blockdev write zeroes read split ...passed 00:20:47.412 Test: blockdev write zeroes read split partial ...passed 00:20:47.412 Test: blockdev reset ...passed 00:20:47.412 Test: blockdev write read 8 blocks ...passed 00:20:47.412 Test: blockdev write read size > 128k ...passed 00:20:47.412 Test: blockdev write read invalid size ...passed 00:20:47.412 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:47.412 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:47.412 Test: blockdev write read max offset ...passed 00:20:47.412 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:47.412 Test: blockdev writev readv 8 blocks ...passed 00:20:47.412 Test: blockdev writev readv 30 x 1block ...passed 00:20:47.412 Test: blockdev writev readv block ...passed 00:20:47.412 Test: blockdev writev readv size > 128k ...passed 00:20:47.412 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:47.412 Test: blockdev comparev and writev ...passed 00:20:47.412 Test: blockdev nvme passthru rw ...passed 00:20:47.412 Test: blockdev nvme passthru vendor specific ...passed 00:20:47.412 Test: blockdev nvme admin passthru ...passed 00:20:47.412 Test: blockdev copy ...passed 00:20:47.412 00:20:47.412 Run Summary: Type Total Ran Passed Failed Inactive 00:20:47.412 suites 1 1 n/a 0 0 00:20:47.412 tests 23 23 23 0 0 00:20:47.412 asserts 130 130 130 0 n/a 00:20:47.412 00:20:47.412 Elapsed time = 0.646 seconds 00:20:47.412 0 00:20:47.412 09:57:48 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90450 00:20:47.412 
09:57:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90450 ']' 00:20:47.412 09:57:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90450 00:20:47.412 09:57:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:20:47.412 09:57:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.412 09:57:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90450 00:20:47.412 09:57:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:47.412 09:57:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:47.412 09:57:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90450' 00:20:47.412 killing process with pid 90450 00:20:47.412 09:57:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90450 00:20:47.412 09:57:48 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90450 00:20:48.808 09:57:49 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:48.808 00:20:48.808 real 0m2.703s 00:20:48.808 user 0m6.684s 00:20:48.808 sys 0m0.387s 00:20:48.808 09:57:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:48.808 09:57:49 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:48.808 ************************************ 00:20:48.808 END TEST bdev_bounds 00:20:48.808 ************************************ 00:20:48.808 09:57:49 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:48.808 09:57:49 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:48.808 09:57:49 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.808 
09:57:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:48.808 ************************************ 00:20:48.808 START TEST bdev_nbd 00:20:48.808 ************************************ 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90511 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90511 /var/tmp/spdk-nbd.sock 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90511 ']' 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:48.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.808 09:57:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:48.808 [2024-11-27 09:57:49.877933] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:20:48.808 [2024-11-27 09:57:49.878152] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.067 [2024-11-27 09:57:50.053714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.067 [2024-11-27 09:57:50.163752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.634 09:57:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.634 09:57:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:20:49.634 09:57:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:20:49.634 09:57:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:49.634 09:57:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:20:49.634 09:57:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:49.634 09:57:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:20:49.634 09:57:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:49.634 09:57:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:20:49.634 09:57:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:49.634 09:57:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:49.634 09:57:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:49.634 09:57:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:49.634 09:57:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:49.634 09:57:50 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:20:49.893 09:57:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:49.893 09:57:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:49.893 09:57:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:49.893 09:57:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:49.893 09:57:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:49.893 09:57:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:49.893 09:57:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:49.893 09:57:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:49.893 09:57:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:49.893 09:57:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:49.893 09:57:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:49.893 09:57:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:49.893 1+0 records in 00:20:49.893 1+0 records out 00:20:49.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0005957 s, 6.9 MB/s 00:20:49.893 09:57:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:49.893 09:57:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:49.893 09:57:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:49.893 09:57:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:20:49.893 09:57:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:49.893 09:57:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:49.893 09:57:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:49.893 09:57:50 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:50.150 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:50.150 { 00:20:50.150 "nbd_device": "/dev/nbd0", 00:20:50.150 "bdev_name": "raid5f" 00:20:50.150 } 00:20:50.150 ]' 00:20:50.150 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:50.150 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:50.150 { 00:20:50.150 "nbd_device": "/dev/nbd0", 00:20:50.150 "bdev_name": "raid5f" 00:20:50.150 } 00:20:50.150 ]' 00:20:50.150 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:50.150 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:50.150 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:50.150 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:50.150 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:50.150 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:50.150 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:50.150 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:50.408 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:20:50.408 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:50.408 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:50.408 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:50.408 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:50.408 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:50.408 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:50.408 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:50.408 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:50.408 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:50.408 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:50.667 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:20:50.925 /dev/nbd0 00:20:50.925 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:50.925 09:57:51 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:50.925 09:57:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:50.925 09:57:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:20:50.925 09:57:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:50.925 09:57:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:50.925 09:57:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:50.925 09:57:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:20:50.925 09:57:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:50.925 09:57:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:50.925 09:57:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:50.925 1+0 records in 00:20:50.925 1+0 records out 00:20:50.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433077 s, 9.5 MB/s 00:20:50.925 09:57:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:50.925 09:57:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:20:50.926 09:57:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:50.926 09:57:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:50.926 09:57:51 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:20:50.926 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:50.926 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:50.926 09:57:51 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:50.926 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:50.926 09:57:51 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:51.184 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:51.184 { 00:20:51.184 "nbd_device": "/dev/nbd0", 00:20:51.184 "bdev_name": "raid5f" 00:20:51.184 } 00:20:51.184 ]' 00:20:51.184 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:51.184 { 00:20:51.184 "nbd_device": "/dev/nbd0", 00:20:51.184 "bdev_name": "raid5f" 00:20:51.184 } 00:20:51.184 ]' 00:20:51.184 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:51.184 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:20:51.184 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:20:51.184 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:51.184 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:20:51.184 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:20:51.184 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:20:51.184 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:20:51.184 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:20:51.184 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:51.184 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:51.184 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:51.184 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:51.184 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:51.184 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:51.184 256+0 records in 00:20:51.184 256+0 records out 00:20:51.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146771 s, 71.4 MB/s 00:20:51.184 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:51.185 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:51.185 256+0 records in 00:20:51.185 256+0 records out 00:20:51.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302354 s, 34.7 MB/s 00:20:51.185 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:20:51.185 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:51.185 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:51.185 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:51.185 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:51.185 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:51.185 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:51.185 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:51.185 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:51.185 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:51.185 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:51.185 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:51.185 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:51.185 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:51.185 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:51.185 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:51.185 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:51.443 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:51.443 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:51.443 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:51.443 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:51.443 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:51.443 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:51.443 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:51.443 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:51.443 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:51.443 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:51.444 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:20:51.702 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:51.702 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:51.702 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:51.702 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:51.702 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:51.702 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:51.702 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:51.702 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:51.702 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:51.702 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:51.702 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:51.702 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:51.702 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:51.702 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:51.702 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:51.702 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:51.960 malloc_lvol_verify 00:20:51.960 09:57:52 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:52.219 d398f66d-93cd-473b-896b-3ea1f0d46f78 00:20:52.219 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:52.477 03392e22-35f0-4cb0-8e6b-8f5ed878e942 00:20:52.477 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:52.477 /dev/nbd0 00:20:52.477 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:52.477 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:52.477 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:52.477 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:52.477 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:52.477 mke2fs 1.47.0 (5-Feb-2023) 00:20:52.477 Discarding device blocks: 0/4096 done 00:20:52.477 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:52.477 00:20:52.477 Allocating group tables: 0/1 done 00:20:52.477 Writing inode tables: 0/1 done 00:20:52.477 Creating journal (1024 blocks): done 00:20:52.477 Writing superblocks and filesystem accounting information: 0/1 done 00:20:52.477 00:20:52.477 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:52.477 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:52.477 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:52.477 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:52.477 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:52.477 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:52.477 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:52.735 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:52.735 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:52.735 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:52.735 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:52.735 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:52.736 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:52.736 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:52.736 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:52.736 09:57:53 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90511 00:20:52.736 09:57:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90511 ']' 00:20:52.736 09:57:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90511 00:20:52.736 09:57:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:20:52.736 09:57:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.736 09:57:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90511 00:20:52.736 09:57:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:52.736 09:57:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:52.736 killing process with pid 90511 00:20:52.736 09:57:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90511' 00:20:52.736 09:57:53 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90511 00:20:52.736 09:57:53 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90511 00:20:54.113 09:57:55 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:54.113 00:20:54.113 real 0m5.452s 00:20:54.113 user 0m7.359s 00:20:54.113 sys 0m1.285s 00:20:54.113 09:57:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:54.113 ************************************ 00:20:54.113 END TEST bdev_nbd 00:20:54.113 09:57:55 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:54.113 ************************************ 00:20:54.374 09:57:55 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:20:54.374 09:57:55 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:20:54.374 09:57:55 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:20:54.374 09:57:55 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:20:54.374 09:57:55 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:54.374 09:57:55 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:54.374 09:57:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:54.374 ************************************ 00:20:54.374 START TEST bdev_fio 00:20:54.374 ************************************ 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:54.374 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:54.374 09:57:55 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:54.374 ************************************ 00:20:54.374 START TEST bdev_fio_rw_verify 00:20:54.374 ************************************ 00:20:54.375 09:57:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:54.375 09:57:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:54.375 09:57:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:54.375 09:57:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:54.375 09:57:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:54.375 09:57:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:54.375 09:57:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:20:54.375 09:57:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:54.375 09:57:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:54.375 09:57:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:54.375 09:57:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:20:54.375 09:57:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:54.634 09:57:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:54.635 09:57:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:54.635 09:57:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:20:54.635 09:57:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:54.635 09:57:55 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:54.635 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:54.635 fio-3.35 00:20:54.635 Starting 1 thread 00:21:06.858 00:21:06.858 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90710: Wed Nov 27 09:58:06 2024 00:21:06.858 read: IOPS=12.3k, BW=48.2MiB/s (50.5MB/s)(482MiB/10001msec) 00:21:06.858 slat (nsec): min=17450, max=88326, avg=19412.62, stdev=1794.72 00:21:06.858 clat (usec): min=10, max=321, avg=129.41, stdev=45.95 00:21:06.858 lat (usec): min=29, max=345, avg=148.82, stdev=46.15 00:21:06.858 clat percentiles (usec): 00:21:06.858 | 50.000th=[ 133], 99.000th=[ 215], 99.900th=[ 241], 99.990th=[ 269], 00:21:06.858 | 99.999th=[ 297] 00:21:06.858 write: IOPS=12.9k, BW=50.5MiB/s (53.0MB/s)(499MiB/9874msec); 0 zone resets 00:21:06.858 slat (usec): min=7, max=297, avg=16.10, stdev= 3.89 00:21:06.858 clat (usec): min=56, max=1604, avg=299.73, stdev=49.01 00:21:06.858 lat (usec): min=71, max=1745, avg=315.83, stdev=50.56 00:21:06.858 clat percentiles (usec): 00:21:06.858 | 50.000th=[ 302], 99.000th=[ 383], 99.900th=[ 840], 99.990th=[ 1516], 00:21:06.858 | 99.999th=[ 1582] 00:21:06.858 bw ( KiB/s): min=47664, max=53912, per=98.68%, avg=51048.84, stdev=1446.39, samples=19 00:21:06.858 iops : min=11916, max=13478, avg=12762.21, stdev=361.60, samples=19 00:21:06.858 lat (usec) : 20=0.01%, 50=0.01%, 
100=16.49%, 250=38.37%, 500=44.98% 00:21:06.858 lat (usec) : 750=0.09%, 1000=0.04% 00:21:06.858 lat (msec) : 2=0.03% 00:21:06.858 cpu : usr=98.90%, sys=0.43%, ctx=24, majf=0, minf=10103 00:21:06.858 IO depths : 1=7.6%, 2=19.8%, 4=55.3%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:06.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.858 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.858 issued rwts: total=123403,127702,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.858 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:06.858 00:21:06.858 Run status group 0 (all jobs): 00:21:06.858 READ: bw=48.2MiB/s (50.5MB/s), 48.2MiB/s-48.2MiB/s (50.5MB/s-50.5MB/s), io=482MiB (505MB), run=10001-10001msec 00:21:06.858 WRITE: bw=50.5MiB/s (53.0MB/s), 50.5MiB/s-50.5MiB/s (53.0MB/s-53.0MB/s), io=499MiB (523MB), run=9874-9874msec 00:21:07.430 ----------------------------------------------------- 00:21:07.430 Suppressions used: 00:21:07.430 count bytes template 00:21:07.430 1 7 /usr/src/fio/parse.c 00:21:07.430 474 45504 /usr/src/fio/iolog.c 00:21:07.430 1 8 libtcmalloc_minimal.so 00:21:07.430 1 904 libcrypto.so 00:21:07.430 ----------------------------------------------------- 00:21:07.430 00:21:07.430 00:21:07.430 real 0m12.906s 00:21:07.430 user 0m13.116s 00:21:07.430 sys 0m0.724s 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:21:07.430 ************************************ 00:21:07.430 END TEST bdev_fio_rw_verify 00:21:07.430 ************************************ 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "e66392c4-ee96-4ae3-bfef-2ed2b8bfa940"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"e66392c4-ee96-4ae3-bfef-2ed2b8bfa940",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "e66392c4-ee96-4ae3-bfef-2ed2b8bfa940",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "51d8878a-cf66-47ed-bebe-ed8ae40f448a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "47f629a9-18ab-47f9-9254-9e5d0e03277b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "d7459c5e-779b-4400-8922-31534e1722d8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:07.430 /home/vagrant/spdk_repo/spdk 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:21:07.430 00:21:07.430 real 0m13.213s 00:21:07.430 user 0m13.249s 00:21:07.430 sys 0m0.863s 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.430 09:58:08 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:07.430 ************************************ 00:21:07.430 END TEST bdev_fio 00:21:07.430 ************************************ 00:21:07.691 09:58:08 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:07.691 09:58:08 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:07.691 09:58:08 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:07.691 09:58:08 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.691 09:58:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:07.691 ************************************ 00:21:07.691 START TEST bdev_verify 00:21:07.691 ************************************ 00:21:07.691 09:58:08 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:07.691 [2024-11-27 09:58:08.689918] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 
00:21:07.691 [2024-11-27 09:58:08.690074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90874 ] 00:21:07.951 [2024-11-27 09:58:08.871923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:07.951 [2024-11-27 09:58:08.982505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.951 [2024-11-27 09:58:08.982544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.527 Running I/O for 5 seconds... 00:21:10.407 10497.00 IOPS, 41.00 MiB/s [2024-11-27T09:58:12.921Z] 10503.00 IOPS, 41.03 MiB/s [2024-11-27T09:58:13.859Z] 10507.67 IOPS, 41.05 MiB/s [2024-11-27T09:58:14.794Z] 10536.25 IOPS, 41.16 MiB/s [2024-11-27T09:58:14.794Z] 10553.40 IOPS, 41.22 MiB/s 00:21:13.661 Latency(us) 00:21:13.661 [2024-11-27T09:58:14.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.661 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:13.661 Verification LBA range: start 0x0 length 0x2000 00:21:13.661 raid5f : 5.02 4121.52 16.10 0.00 0.00 46786.50 273.66 37547.26 00:21:13.661 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:13.661 Verification LBA range: start 0x2000 length 0x2000 00:21:13.661 raid5f : 5.02 6442.96 25.17 0.00 0.00 29940.53 313.01 21749.94 00:21:13.661 [2024-11-27T09:58:14.794Z] =================================================================================================================== 00:21:13.661 [2024-11-27T09:58:14.794Z] Total : 10564.48 41.27 0.00 0.00 36514.60 273.66 37547.26 00:21:15.042 00:21:15.042 real 0m7.244s 00:21:15.042 user 0m13.367s 00:21:15.042 sys 0m0.288s 00:21:15.042 09:58:15 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:15.042 09:58:15 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:21:15.042 ************************************ 00:21:15.042 END TEST bdev_verify 00:21:15.042 ************************************ 00:21:15.042 09:58:15 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:15.042 09:58:15 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:15.042 09:58:15 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:15.042 09:58:15 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:15.042 ************************************ 00:21:15.042 START TEST bdev_verify_big_io 00:21:15.042 ************************************ 00:21:15.042 09:58:15 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:21:15.042 [2024-11-27 09:58:15.995355] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:21:15.042 [2024-11-27 09:58:15.995483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90972 ] 00:21:15.302 [2024-11-27 09:58:16.173585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:15.302 [2024-11-27 09:58:16.279131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.302 [2024-11-27 09:58:16.279166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.872 Running I/O for 5 seconds... 
00:21:17.753 633.00 IOPS, 39.56 MiB/s [2024-11-27T09:58:20.280Z] 728.50 IOPS, 45.53 MiB/s [2024-11-27T09:58:21.219Z] 739.67 IOPS, 46.23 MiB/s [2024-11-27T09:58:22.159Z] 745.25 IOPS, 46.58 MiB/s [2024-11-27T09:58:22.159Z] 761.60 IOPS, 47.60 MiB/s 00:21:21.026 Latency(us) 00:21:21.026 [2024-11-27T09:58:22.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.026 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:21:21.026 Verification LBA range: start 0x0 length 0x200 00:21:21.026 raid5f : 5.32 334.38 20.90 0.00 0.00 9513122.62 224.48 406609.38 00:21:21.026 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:21:21.026 Verification LBA range: start 0x200 length 0x200 00:21:21.026 raid5f : 5.26 434.83 27.18 0.00 0.00 7404303.42 160.08 329683.28 00:21:21.026 [2024-11-27T09:58:22.159Z] =================================================================================================================== 00:21:21.026 [2024-11-27T09:58:22.159Z] Total : 769.21 48.08 0.00 0.00 8326911.82 160.08 406609.38 00:21:22.410 00:21:22.410 real 0m7.558s 00:21:22.410 user 0m14.003s 00:21:22.410 sys 0m0.292s 00:21:22.410 09:58:23 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.410 09:58:23 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:21:22.410 ************************************ 00:21:22.410 END TEST bdev_verify_big_io 00:21:22.410 ************************************ 00:21:22.410 09:58:23 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:22.410 09:58:23 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:22.410 09:58:23 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:22.410 09:58:23 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:22.410 ************************************ 00:21:22.410 START TEST bdev_write_zeroes 00:21:22.410 ************************************ 00:21:22.410 09:58:23 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:22.671 [2024-11-27 09:58:23.627056] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:21:22.671 [2024-11-27 09:58:23.627162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91066 ] 00:21:22.671 [2024-11-27 09:58:23.800556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.931 [2024-11-27 09:58:23.908820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.500 Running I/O for 1 seconds... 
00:21:24.440 29991.00 IOPS, 117.15 MiB/s 00:21:24.440 Latency(us) 00:21:24.440 [2024-11-27T09:58:25.573Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.440 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:21:24.440 raid5f : 1.01 29968.58 117.06 0.00 0.00 4258.35 1237.74 5780.90 00:21:24.440 [2024-11-27T09:58:25.573Z] =================================================================================================================== 00:21:24.440 [2024-11-27T09:58:25.573Z] Total : 29968.58 117.06 0.00 0.00 4258.35 1237.74 5780.90 00:21:25.820 00:21:25.820 real 0m3.215s 00:21:25.820 user 0m2.829s 00:21:25.820 sys 0m0.258s 00:21:25.820 09:58:26 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:25.820 09:58:26 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:21:25.820 ************************************ 00:21:25.820 END TEST bdev_write_zeroes 00:21:25.820 ************************************ 00:21:25.820 09:58:26 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:25.820 09:58:26 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:25.820 09:58:26 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.820 09:58:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:25.820 ************************************ 00:21:25.820 START TEST bdev_json_nonenclosed 00:21:25.820 ************************************ 00:21:25.820 09:58:26 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:25.820 [2024-11-27 
09:58:26.919580] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:21:25.820 [2024-11-27 09:58:26.919688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91119 ] 00:21:26.080 [2024-11-27 09:58:27.094130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.080 [2024-11-27 09:58:27.203662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.080 [2024-11-27 09:58:27.203754] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:21:26.080 [2024-11-27 09:58:27.203779] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:26.080 [2024-11-27 09:58:27.203789] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:26.340 00:21:26.340 real 0m0.615s 00:21:26.340 user 0m0.375s 00:21:26.340 sys 0m0.134s 00:21:26.340 09:58:27 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:26.340 09:58:27 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:21:26.340 ************************************ 00:21:26.340 END TEST bdev_json_nonenclosed 00:21:26.340 ************************************ 00:21:26.601 09:58:27 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:26.601 09:58:27 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:21:26.601 09:58:27 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:26.601 09:58:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:26.601 
************************************ 00:21:26.601 START TEST bdev_json_nonarray 00:21:26.601 ************************************ 00:21:26.601 09:58:27 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:21:26.601 [2024-11-27 09:58:27.597891] Starting SPDK v25.01-pre git sha1 597702889 / DPDK 24.03.0 initialization... 00:21:26.601 [2024-11-27 09:58:27.598007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91149 ] 00:21:26.860 [2024-11-27 09:58:27.774361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.861 [2024-11-27 09:58:27.880384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.861 [2024-11-27 09:58:27.880486] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:21:26.861 [2024-11-27 09:58:27.880503] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:21:26.861 [2024-11-27 09:58:27.880520] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:27.121 00:21:27.121 real 0m0.604s 00:21:27.121 user 0m0.367s 00:21:27.121 sys 0m0.132s 00:21:27.121 09:58:28 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.121 09:58:28 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:21:27.121 ************************************ 00:21:27.121 END TEST bdev_json_nonarray 00:21:27.121 ************************************ 00:21:27.121 09:58:28 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:21:27.121 09:58:28 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:21:27.121 09:58:28 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:21:27.121 09:58:28 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:21:27.121 09:58:28 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:21:27.121 09:58:28 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:21:27.121 09:58:28 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:27.121 09:58:28 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:21:27.121 09:58:28 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:21:27.121 09:58:28 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:21:27.121 09:58:28 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:21:27.121 00:21:27.121 real 0m47.758s 00:21:27.121 user 1m4.395s 00:21:27.121 sys 0m5.005s 00:21:27.121 09:58:28 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:27.121 09:58:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:27.121 
************************************ 00:21:27.121 END TEST blockdev_raid5f 00:21:27.121 ************************************ 00:21:27.381 09:58:28 -- spdk/autotest.sh@194 -- # uname -s 00:21:27.381 09:58:28 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:21:27.381 09:58:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:27.381 09:58:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:21:27.381 09:58:28 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:21:27.381 09:58:28 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:21:27.381 09:58:28 -- spdk/autotest.sh@260 -- # timing_exit lib 00:21:27.381 09:58:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:27.381 09:58:28 -- common/autotest_common.sh@10 -- # set +x 00:21:27.381 09:58:28 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:21:27.381 09:58:28 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:21:27.381 09:58:28 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:21:27.381 09:58:28 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:27.381 09:58:28 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:27.381 09:58:28 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:27.381 09:58:28 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:27.381 09:58:28 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:27.381 09:58:28 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:27.381 09:58:28 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:27.381 09:58:28 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:27.381 09:58:28 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:21:27.381 09:58:28 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:27.381 09:58:28 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:21:27.381 09:58:28 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:27.381 09:58:28 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:27.381 09:58:28 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:21:27.381 09:58:28 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:21:27.381 09:58:28 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:21:27.381 09:58:28 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:21:27.381 09:58:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:27.381 09:58:28 -- common/autotest_common.sh@10 -- # set +x 00:21:27.381 09:58:28 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:21:27.381 09:58:28 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:21:27.381 09:58:28 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:21:27.381 09:58:28 -- common/autotest_common.sh@10 -- # set +x 00:21:29.919 INFO: APP EXITING 00:21:29.920 INFO: killing all VMs 00:21:29.920 INFO: killing vhost app 00:21:29.920 INFO: EXIT DONE 00:21:30.179 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:30.179 Waiting for block devices as requested 00:21:30.179 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:30.439 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:31.379 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:31.379 Cleaning 00:21:31.379 Removing: /var/run/dpdk/spdk0/config 00:21:31.379 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:31.379 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:31.379 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:31.379 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:31.379 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:31.379 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:31.379 Removing: /dev/shm/spdk_tgt_trace.pid57069 00:21:31.379 Removing: /var/run/dpdk/spdk0 00:21:31.379 Removing: /var/run/dpdk/spdk_pid56822 00:21:31.379 Removing: /var/run/dpdk/spdk_pid57069 00:21:31.379 Removing: /var/run/dpdk/spdk_pid57303 00:21:31.379 Removing: /var/run/dpdk/spdk_pid57418 00:21:31.379 Removing: /var/run/dpdk/spdk_pid57474 00:21:31.379 Removing: /var/run/dpdk/spdk_pid57608 00:21:31.379 Removing: /var/run/dpdk/spdk_pid57631 
00:21:31.379 Removing: /var/run/dpdk/spdk_pid57841 00:21:31.379 Removing: /var/run/dpdk/spdk_pid57965 00:21:31.379 Removing: /var/run/dpdk/spdk_pid58077 00:21:31.379 Removing: /var/run/dpdk/spdk_pid58205 00:21:31.379 Removing: /var/run/dpdk/spdk_pid58318 00:21:31.379 Removing: /var/run/dpdk/spdk_pid58358 00:21:31.379 Removing: /var/run/dpdk/spdk_pid58400 00:21:31.379 Removing: /var/run/dpdk/spdk_pid58476 00:21:31.379 Removing: /var/run/dpdk/spdk_pid58599 00:21:31.379 Removing: /var/run/dpdk/spdk_pid59048 00:21:31.379 Removing: /var/run/dpdk/spdk_pid59123 00:21:31.379 Removing: /var/run/dpdk/spdk_pid59210 00:21:31.379 Removing: /var/run/dpdk/spdk_pid59226 00:21:31.379 Removing: /var/run/dpdk/spdk_pid59386 00:21:31.379 Removing: /var/run/dpdk/spdk_pid59412 00:21:31.379 Removing: /var/run/dpdk/spdk_pid59569 00:21:31.379 Removing: /var/run/dpdk/spdk_pid59585 00:21:31.379 Removing: /var/run/dpdk/spdk_pid59660 00:21:31.379 Removing: /var/run/dpdk/spdk_pid59678 00:21:31.379 Removing: /var/run/dpdk/spdk_pid59753 00:21:31.379 Removing: /var/run/dpdk/spdk_pid59771 00:21:31.379 Removing: /var/run/dpdk/spdk_pid59972 00:21:31.379 Removing: /var/run/dpdk/spdk_pid60008 00:21:31.379 Removing: /var/run/dpdk/spdk_pid60097 00:21:31.379 Removing: /var/run/dpdk/spdk_pid61463 00:21:31.379 Removing: /var/run/dpdk/spdk_pid61674 00:21:31.379 Removing: /var/run/dpdk/spdk_pid61814 00:21:31.379 Removing: /var/run/dpdk/spdk_pid62463 00:21:31.379 Removing: /var/run/dpdk/spdk_pid62670 00:21:31.379 Removing: /var/run/dpdk/spdk_pid62816 00:21:31.379 Removing: /var/run/dpdk/spdk_pid63459 00:21:31.379 Removing: /var/run/dpdk/spdk_pid63795 00:21:31.379 Removing: /var/run/dpdk/spdk_pid63939 00:21:31.639 Removing: /var/run/dpdk/spdk_pid65331 00:21:31.639 Removing: /var/run/dpdk/spdk_pid65584 00:21:31.639 Removing: /var/run/dpdk/spdk_pid65730 00:21:31.639 Removing: /var/run/dpdk/spdk_pid67115 00:21:31.639 Removing: /var/run/dpdk/spdk_pid67368 00:21:31.639 Removing: /var/run/dpdk/spdk_pid67519 
00:21:31.639 Removing: /var/run/dpdk/spdk_pid68911 00:21:31.639 Removing: /var/run/dpdk/spdk_pid69351 00:21:31.639 Removing: /var/run/dpdk/spdk_pid69497 00:21:31.639 Removing: /var/run/dpdk/spdk_pid70990 00:21:31.639 Removing: /var/run/dpdk/spdk_pid71251 00:21:31.639 Removing: /var/run/dpdk/spdk_pid71402 00:21:31.639 Removing: /var/run/dpdk/spdk_pid72904 00:21:31.639 Removing: /var/run/dpdk/spdk_pid73164 00:21:31.639 Removing: /var/run/dpdk/spdk_pid73310 00:21:31.639 Removing: /var/run/dpdk/spdk_pid74799 00:21:31.639 Removing: /var/run/dpdk/spdk_pid75279 00:21:31.639 Removing: /var/run/dpdk/spdk_pid75425 00:21:31.639 Removing: /var/run/dpdk/spdk_pid75575 00:21:31.639 Removing: /var/run/dpdk/spdk_pid75993 00:21:31.639 Removing: /var/run/dpdk/spdk_pid76731 00:21:31.639 Removing: /var/run/dpdk/spdk_pid77133 00:21:31.639 Removing: /var/run/dpdk/spdk_pid77819 00:21:31.639 Removing: /var/run/dpdk/spdk_pid78260 00:21:31.639 Removing: /var/run/dpdk/spdk_pid79025 00:21:31.639 Removing: /var/run/dpdk/spdk_pid79459 00:21:31.639 Removing: /var/run/dpdk/spdk_pid81442 00:21:31.639 Removing: /var/run/dpdk/spdk_pid81886 00:21:31.639 Removing: /var/run/dpdk/spdk_pid82331 00:21:31.639 Removing: /var/run/dpdk/spdk_pid84435 00:21:31.639 Removing: /var/run/dpdk/spdk_pid84919 00:21:31.639 Removing: /var/run/dpdk/spdk_pid85439 00:21:31.639 Removing: /var/run/dpdk/spdk_pid86508 00:21:31.639 Removing: /var/run/dpdk/spdk_pid86831 00:21:31.639 Removing: /var/run/dpdk/spdk_pid87781 00:21:31.639 Removing: /var/run/dpdk/spdk_pid88110 00:21:31.639 Removing: /var/run/dpdk/spdk_pid89055 00:21:31.639 Removing: /var/run/dpdk/spdk_pid89382 00:21:31.639 Removing: /var/run/dpdk/spdk_pid90060 00:21:31.639 Removing: /var/run/dpdk/spdk_pid90342 00:21:31.639 Removing: /var/run/dpdk/spdk_pid90409 00:21:31.639 Removing: /var/run/dpdk/spdk_pid90450 00:21:31.639 Removing: /var/run/dpdk/spdk_pid90695 00:21:31.639 Removing: /var/run/dpdk/spdk_pid90874 00:21:31.639 Removing: /var/run/dpdk/spdk_pid90972 
00:21:31.639 Removing: /var/run/dpdk/spdk_pid91066 00:21:31.639 Removing: /var/run/dpdk/spdk_pid91119 00:21:31.639 Removing: /var/run/dpdk/spdk_pid91149 00:21:31.639 Clean 00:21:31.899 09:58:32 -- common/autotest_common.sh@1453 -- # return 0 00:21:31.899 09:58:32 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:21:31.899 09:58:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:31.899 09:58:32 -- common/autotest_common.sh@10 -- # set +x 00:21:31.899 09:58:32 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:21:31.899 09:58:32 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:31.899 09:58:32 -- common/autotest_common.sh@10 -- # set +x 00:21:31.899 09:58:32 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:31.899 09:58:32 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:31.899 09:58:32 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:31.899 09:58:32 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:21:31.899 09:58:32 -- spdk/autotest.sh@398 -- # hostname 00:21:31.899 09:58:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:32.159 geninfo: WARNING: invalid characters removed from testname! 
00:21:58.724 09:58:55 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:58.724 09:58:58 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:00.107 09:59:01 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:02.647 09:59:03 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:04.559 09:59:05 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:07.102 09:59:07 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:09.010 09:59:09 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:09.010 09:59:09 -- spdk/autorun.sh@1 -- $ timing_finish 00:22:09.010 09:59:09 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:22:09.010 09:59:09 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:09.010 09:59:09 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:22:09.010 09:59:09 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:09.010 + [[ -n 5439 ]] 00:22:09.010 + sudo kill 5439 00:22:09.020 [Pipeline] } 00:22:09.039 [Pipeline] // timeout 00:22:09.045 [Pipeline] } 00:22:09.059 [Pipeline] // stage 00:22:09.064 [Pipeline] } 00:22:09.080 [Pipeline] // catchError 00:22:09.089 [Pipeline] stage 00:22:09.091 [Pipeline] { (Stop VM) 00:22:09.103 [Pipeline] sh 00:22:09.386 + vagrant halt 00:22:11.941 ==> default: Halting domain... 00:22:20.117 [Pipeline] sh 00:22:20.437 + vagrant destroy -f 00:22:22.976 ==> default: Removing domain... 
00:22:22.990 [Pipeline] sh 00:22:23.275 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:22:23.285 [Pipeline] } 00:22:23.302 [Pipeline] // stage 00:22:23.310 [Pipeline] } 00:22:23.326 [Pipeline] // dir 00:22:23.334 [Pipeline] } 00:22:23.350 [Pipeline] // wrap 00:22:23.357 [Pipeline] } 00:22:23.370 [Pipeline] // catchError 00:22:23.382 [Pipeline] stage 00:22:23.384 [Pipeline] { (Epilogue) 00:22:23.401 [Pipeline] sh 00:22:23.688 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:27.902 [Pipeline] catchError 00:22:27.905 [Pipeline] { 00:22:27.921 [Pipeline] sh 00:22:28.207 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:28.207 Artifacts sizes are good 00:22:28.218 [Pipeline] } 00:22:28.233 [Pipeline] // catchError 00:22:28.244 [Pipeline] archiveArtifacts 00:22:28.251 Archiving artifacts 00:22:28.375 [Pipeline] cleanWs 00:22:28.386 [WS-CLEANUP] Deleting project workspace... 00:22:28.386 [WS-CLEANUP] Deferred wipeout is used... 00:22:28.392 [WS-CLEANUP] done 00:22:28.394 [Pipeline] } 00:22:28.409 [Pipeline] // stage 00:22:28.415 [Pipeline] } 00:22:28.432 [Pipeline] // node 00:22:28.439 [Pipeline] End of Pipeline 00:22:28.484 Finished: SUCCESS